May 14 00:36:24.712147 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 00:36:24.712167 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue May 13 23:17:31 -00 2025
May 14 00:36:24.712175 kernel: efi: EFI v2.70 by EDK II
May 14 00:36:24.712181 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 14 00:36:24.712186 kernel: random: crng init done
May 14 00:36:24.712192 kernel: ACPI: Early table checksum verification disabled
May 14 00:36:24.712198 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 14 00:36:24.712205 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 00:36:24.712211 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:36:24.712216 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:36:24.712222 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:36:24.712227 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:36:24.712232 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:36:24.712238 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:36:24.712245 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:36:24.712251 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:36:24.712257 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:36:24.712263 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 00:36:24.712268 kernel: NUMA: Failed to initialise from firmware
May 14 00:36:24.712274 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:36:24.712280 kernel: NUMA: NODE_DATA [mem 0xdcb0c900-0xdcb11fff]
May 14 00:36:24.712286 kernel: Zone ranges:
May 14 00:36:24.712291 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:36:24.712298 kernel: DMA32 empty
May 14 00:36:24.712303 kernel: Normal empty
May 14 00:36:24.712309 kernel: Movable zone start for each node
May 14 00:36:24.712314 kernel: Early memory node ranges
May 14 00:36:24.712320 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 14 00:36:24.712326 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 14 00:36:24.712332 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 14 00:36:24.712353 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 14 00:36:24.712360 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 14 00:36:24.712366 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 14 00:36:24.712371 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 14 00:36:24.712377 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 00:36:24.712384 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 00:36:24.712390 kernel: psci: probing for conduit method from ACPI.
May 14 00:36:24.712396 kernel: psci: PSCIv1.1 detected in firmware.
May 14 00:36:24.712402 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 00:36:24.712408 kernel: psci: Trusted OS migration not required
May 14 00:36:24.712416 kernel: psci: SMC Calling Convention v1.1
May 14 00:36:24.712422 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 00:36:24.712429 kernel: ACPI: SRAT not present
May 14 00:36:24.712435 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 14 00:36:24.712442 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 14 00:36:24.712448 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 00:36:24.712454 kernel: Detected PIPT I-cache on CPU0
May 14 00:36:24.712460 kernel: CPU features: detected: GIC system register CPU interface
May 14 00:36:24.712466 kernel: CPU features: detected: Hardware dirty bit management
May 14 00:36:24.712486 kernel: CPU features: detected: Spectre-v4
May 14 00:36:24.712493 kernel: CPU features: detected: Spectre-BHB
May 14 00:36:24.712500 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 00:36:24.712506 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 00:36:24.712512 kernel: CPU features: detected: ARM erratum 1418040
May 14 00:36:24.712518 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 00:36:24.712524 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 14 00:36:24.712530 kernel: Policy zone: DMA
May 14 00:36:24.712537 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923
May 14 00:36:24.712543 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 00:36:24.712549 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 00:36:24.712555 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 00:36:24.712561 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 00:36:24.712569 kernel: Memory: 2457344K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114944K reserved, 0K cma-reserved)
May 14 00:36:24.712575 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 00:36:24.712581 kernel: trace event string verifier disabled
May 14 00:36:24.712587 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 00:36:24.712593 kernel: rcu: RCU event tracing is enabled.
May 14 00:36:24.712600 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 00:36:24.712606 kernel: Trampoline variant of Tasks RCU enabled.
May 14 00:36:24.712612 kernel: Tracing variant of Tasks RCU enabled.
May 14 00:36:24.712618 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 00:36:24.712624 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 00:36:24.712630 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 00:36:24.712637 kernel: GICv3: 256 SPIs implemented
May 14 00:36:24.712643 kernel: GICv3: 0 Extended SPIs implemented
May 14 00:36:24.712656 kernel: GICv3: Distributor has no Range Selector support
May 14 00:36:24.712662 kernel: Root IRQ handler: gic_handle_irq
May 14 00:36:24.712668 kernel: GICv3: 16 PPIs implemented
May 14 00:36:24.712674 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 00:36:24.712680 kernel: ACPI: SRAT not present
May 14 00:36:24.712686 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 00:36:24.712693 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 14 00:36:24.712699 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 14 00:36:24.712705 kernel: GICv3: using LPI property table @0x00000000400d0000
May 14 00:36:24.712711 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 14 00:36:24.712718 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:36:24.712724 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 00:36:24.712731 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 00:36:24.712737 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 00:36:24.712743 kernel: arm-pv: using stolen time PV
May 14 00:36:24.712750 kernel: Console: colour dummy device 80x25
May 14 00:36:24.712756 kernel: ACPI: Core revision 20210730
May 14 00:36:24.712762 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 00:36:24.712769 kernel: pid_max: default: 32768 minimum: 301
May 14 00:36:24.712775 kernel: LSM: Security Framework initializing
May 14 00:36:24.712782 kernel: SELinux: Initializing.
May 14 00:36:24.712788 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:36:24.712794 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 00:36:24.712800 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 14 00:36:24.712807 kernel: rcu: Hierarchical SRCU implementation.
May 14 00:36:24.712813 kernel: Platform MSI: ITS@0x8080000 domain created
May 14 00:36:24.712819 kernel: PCI/MSI: ITS@0x8080000 domain created
May 14 00:36:24.712825 kernel: Remapping and enabling EFI services.
May 14 00:36:24.712831 kernel: smp: Bringing up secondary CPUs ...
May 14 00:36:24.712838 kernel: Detected PIPT I-cache on CPU1
May 14 00:36:24.712845 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 00:36:24.712851 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 14 00:36:24.712857 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:36:24.712863 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 00:36:24.712870 kernel: Detected PIPT I-cache on CPU2
May 14 00:36:24.712883 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 14 00:36:24.712889 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 14 00:36:24.712896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:36:24.712902 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 14 00:36:24.712909 kernel: Detected PIPT I-cache on CPU3
May 14 00:36:24.712915 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 14 00:36:24.712922 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 14 00:36:24.712928 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 00:36:24.712939 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 14 00:36:24.712946 kernel: smp: Brought up 1 node, 4 CPUs
May 14 00:36:24.712952 kernel: SMP: Total of 4 processors activated.
May 14 00:36:24.712959 kernel: CPU features: detected: 32-bit EL0 Support
May 14 00:36:24.712966 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 00:36:24.712972 kernel: CPU features: detected: Common not Private translations
May 14 00:36:24.712979 kernel: CPU features: detected: CRC32 instructions
May 14 00:36:24.712985 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 00:36:24.712993 kernel: CPU features: detected: LSE atomic instructions
May 14 00:36:24.712999 kernel: CPU features: detected: Privileged Access Never
May 14 00:36:24.713006 kernel: CPU features: detected: RAS Extension Support
May 14 00:36:24.713012 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 00:36:24.713019 kernel: CPU: All CPU(s) started at EL1
May 14 00:36:24.713026 kernel: alternatives: patching kernel code
May 14 00:36:24.713033 kernel: devtmpfs: initialized
May 14 00:36:24.713039 kernel: KASLR enabled
May 14 00:36:24.713046 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 00:36:24.713052 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 00:36:24.713059 kernel: pinctrl core: initialized pinctrl subsystem
May 14 00:36:24.713065 kernel: SMBIOS 3.0.0 present.
May 14 00:36:24.713072 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 14 00:36:24.713078 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 00:36:24.713086 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 00:36:24.713093 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 00:36:24.713099 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 00:36:24.713106 kernel: audit: initializing netlink subsys (disabled)
May 14 00:36:24.713113 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
May 14 00:36:24.713119 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 00:36:24.713126 kernel: cpuidle: using governor menu
May 14 00:36:24.713132 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 00:36:24.713139 kernel: ASID allocator initialised with 32768 entries
May 14 00:36:24.713146 kernel: ACPI: bus type PCI registered
May 14 00:36:24.713153 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 00:36:24.713159 kernel: Serial: AMBA PL011 UART driver
May 14 00:36:24.713166 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 14 00:36:24.713172 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 14 00:36:24.713179 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 14 00:36:24.713185 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 14 00:36:24.713192 kernel: cryptd: max_cpu_qlen set to 1000
May 14 00:36:24.713199 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 00:36:24.713207 kernel: ACPI: Added _OSI(Module Device)
May 14 00:36:24.713214 kernel: ACPI: Added _OSI(Processor Device)
May 14 00:36:24.713220 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 00:36:24.713226 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 00:36:24.713233 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 14 00:36:24.713240 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 14 00:36:24.713246 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 14 00:36:24.713253 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 00:36:24.713260 kernel: ACPI: Interpreter enabled
May 14 00:36:24.713268 kernel: ACPI: Using GIC for interrupt routing
May 14 00:36:24.713274 kernel: ACPI: MCFG table detected, 1 entries
May 14 00:36:24.713281 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 00:36:24.713288 kernel: printk: console [ttyAMA0] enabled
May 14 00:36:24.713294 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 00:36:24.713408 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 00:36:24.713469 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 00:36:24.713527 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 00:36:24.713582 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 00:36:24.713637 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 00:36:24.713653 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 00:36:24.713660 kernel: PCI host bridge to bus 0000:00
May 14 00:36:24.713724 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 00:36:24.713780 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 00:36:24.713832 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 00:36:24.713908 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 00:36:24.713981 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 14 00:36:24.714048 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 14 00:36:24.714109 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 14 00:36:24.714168 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 14 00:36:24.714225 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 00:36:24.714284 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 00:36:24.714342 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 14 00:36:24.714399 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 14 00:36:24.714451 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 00:36:24.714501 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 00:36:24.714552 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 00:36:24.714560 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 00:36:24.714567 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 00:36:24.714575 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 00:36:24.714582 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 00:36:24.714588 kernel: iommu: Default domain type: Translated
May 14 00:36:24.714595 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 00:36:24.714601 kernel: vgaarb: loaded
May 14 00:36:24.714608 kernel: pps_core: LinuxPPS API ver. 1 registered
May 14 00:36:24.714615 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
May 14 00:36:24.714621 kernel: PTP clock support registered
May 14 00:36:24.714627 kernel: Registered efivars operations
May 14 00:36:24.714635 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 00:36:24.714642 kernel: VFS: Disk quotas dquot_6.6.0
May 14 00:36:24.714655 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 00:36:24.714661 kernel: pnp: PnP ACPI init
May 14 00:36:24.714728 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 00:36:24.714738 kernel: pnp: PnP ACPI: found 1 devices
May 14 00:36:24.714744 kernel: NET: Registered PF_INET protocol family
May 14 00:36:24.714751 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 00:36:24.714760 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 00:36:24.714766 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 00:36:24.714773 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 00:36:24.714780 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 14 00:36:24.714786 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 00:36:24.714793 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:36:24.714799 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 00:36:24.714806 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 00:36:24.714812 kernel: PCI: CLS 0 bytes, default 64
May 14 00:36:24.714820 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 14 00:36:24.714826 kernel: kvm [1]: HYP mode not available
May 14 00:36:24.714833 kernel: Initialise system trusted keyrings
May 14 00:36:24.714839 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 00:36:24.714846 kernel: Key type asymmetric registered
May 14 00:36:24.714852 kernel: Asymmetric key parser 'x509' registered
May 14 00:36:24.714858 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 14 00:36:24.714865 kernel: io scheduler mq-deadline registered
May 14 00:36:24.714871 kernel: io scheduler kyber registered
May 14 00:36:24.714886 kernel: io scheduler bfq registered
May 14 00:36:24.714893 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 00:36:24.714899 kernel: ACPI: button: Power Button [PWRB]
May 14 00:36:24.714906 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 00:36:24.714966 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 00:36:24.714975 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 00:36:24.714981 kernel: thunder_xcv, ver 1.0
May 14 00:36:24.714988 kernel: thunder_bgx, ver 1.0
May 14 00:36:24.714994 kernel: nicpf, ver 1.0
May 14 00:36:24.715002 kernel: nicvf, ver 1.0
May 14 00:36:24.715066 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 00:36:24.715121 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T00:36:24 UTC (1747182984)
May 14 00:36:24.715130 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 00:36:24.715136 kernel: NET: Registered PF_INET6 protocol family
May 14 00:36:24.715143 kernel: Segment Routing with IPv6
May 14 00:36:24.715150 kernel: In-situ OAM (IOAM) with IPv6
May 14 00:36:24.715156 kernel: NET: Registered PF_PACKET protocol family
May 14 00:36:24.715164 kernel: Key type dns_resolver registered
May 14 00:36:24.715171 kernel: registered taskstats version 1
May 14 00:36:24.715177 kernel: Loading compiled-in X.509 certificates
May 14 00:36:24.715184 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 7727f4e7680a5b8534f3d5e7bb84b1f695e8c34b'
May 14 00:36:24.715191 kernel: Key type .fscrypt registered
May 14 00:36:24.715197 kernel: Key type fscrypt-provisioning registered
May 14 00:36:24.715204 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 00:36:24.715210 kernel: ima: Allocated hash algorithm: sha1
May 14 00:36:24.715217 kernel: ima: No architecture policies found
May 14 00:36:24.715224 kernel: clk: Disabling unused clocks
May 14 00:36:24.715231 kernel: Freeing unused kernel memory: 36480K
May 14 00:36:24.715237 kernel: Run /init as init process
May 14 00:36:24.715244 kernel: with arguments:
May 14 00:36:24.715250 kernel: /init
May 14 00:36:24.715256 kernel: with environment:
May 14 00:36:24.715262 kernel: HOME=/
May 14 00:36:24.715268 kernel: TERM=linux
May 14 00:36:24.715275 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 00:36:24.715284 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 14 00:36:24.715293 systemd[1]: Detected virtualization kvm.
May 14 00:36:24.715300 systemd[1]: Detected architecture arm64.
May 14 00:36:24.715307 systemd[1]: Running in initrd.
May 14 00:36:24.715313 systemd[1]: No hostname configured, using default hostname.
May 14 00:36:24.715320 systemd[1]: Hostname set to <localhost>.
May 14 00:36:24.715327 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:36:24.715335 systemd[1]: Queued start job for default target initrd.target.
May 14 00:36:24.715342 systemd[1]: Started systemd-ask-password-console.path.
May 14 00:36:24.715349 systemd[1]: Reached target cryptsetup.target.
May 14 00:36:24.715356 systemd[1]: Reached target paths.target.
May 14 00:36:24.715362 systemd[1]: Reached target slices.target.
May 14 00:36:24.715369 systemd[1]: Reached target swap.target.
May 14 00:36:24.715376 systemd[1]: Reached target timers.target.
May 14 00:36:24.715383 systemd[1]: Listening on iscsid.socket.
May 14 00:36:24.715392 systemd[1]: Listening on iscsiuio.socket.
May 14 00:36:24.715401 systemd[1]: Listening on systemd-journald-audit.socket.
May 14 00:36:24.715409 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 14 00:36:24.715415 systemd[1]: Listening on systemd-journald.socket.
May 14 00:36:24.715422 systemd[1]: Listening on systemd-networkd.socket.
May 14 00:36:24.715429 systemd[1]: Listening on systemd-udevd-control.socket.
May 14 00:36:24.715437 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 14 00:36:24.715444 systemd[1]: Reached target sockets.target.
May 14 00:36:24.715452 systemd[1]: Starting kmod-static-nodes.service...
May 14 00:36:24.715459 systemd[1]: Finished network-cleanup.service.
May 14 00:36:24.715466 systemd[1]: Starting systemd-fsck-usr.service...
May 14 00:36:24.715472 systemd[1]: Starting systemd-journald.service...
May 14 00:36:24.715479 systemd[1]: Starting systemd-modules-load.service...
May 14 00:36:24.715486 systemd[1]: Starting systemd-resolved.service...
May 14 00:36:24.715493 systemd[1]: Starting systemd-vconsole-setup.service...
May 14 00:36:24.715500 systemd[1]: Finished kmod-static-nodes.service.
May 14 00:36:24.715507 systemd[1]: Finished systemd-fsck-usr.service.
May 14 00:36:24.715515 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 14 00:36:24.715522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 14 00:36:24.715529 kernel: audit: type=1130 audit(1747182984.712:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.715539 systemd-journald[290]: Journal started
May 14 00:36:24.715578 systemd-journald[290]: Runtime Journal (/run/log/journal/4c7b588510c34383b6ff980dbfeaefe8) is 6.0M, max 48.7M, 42.6M free.
May 14 00:36:24.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.708256 systemd-modules-load[291]: Inserted module 'overlay'
May 14 00:36:24.716985 systemd[1]: Started systemd-journald.service.
May 14 00:36:24.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.717757 systemd[1]: Finished systemd-vconsole-setup.service.
May 14 00:36:24.720523 kernel: audit: type=1130 audit(1747182984.716:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.720568 systemd[1]: Starting dracut-cmdline-ask.service...
May 14 00:36:24.723414 kernel: audit: type=1130 audit(1747182984.719:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.733904 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 00:36:24.737528 systemd-resolved[292]: Positive Trust Anchors:
May 14 00:36:24.738350 kernel: Bridge firewalling registered
May 14 00:36:24.737544 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:36:24.737572 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 14 00:36:24.737829 systemd-modules-load[291]: Inserted module 'br_netfilter'
May 14 00:36:24.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.741825 systemd-resolved[292]: Defaulting to hostname 'linux'.
May 14 00:36:24.748855 kernel: audit: type=1130 audit(1747182984.742:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.748885 kernel: audit: type=1130 audit(1747182984.745:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.742525 systemd[1]: Started systemd-resolved.service.
May 14 00:36:24.743999 systemd[1]: Finished dracut-cmdline-ask.service.
May 14 00:36:24.746683 systemd[1]: Reached target nss-lookup.target.
May 14 00:36:24.751838 kernel: SCSI subsystem initialized
May 14 00:36:24.750104 systemd[1]: Starting dracut-cmdline.service...
May 14 00:36:24.757956 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 00:36:24.757988 kernel: device-mapper: uevent: version 1.0.3
May 14 00:36:24.758586 dracut-cmdline[309]: dracut-dracut-053
May 14 00:36:24.759270 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 14 00:36:24.760742 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923
May 14 00:36:24.760881 systemd-modules-load[291]: Inserted module 'dm_multipath'
May 14 00:36:24.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.766912 kernel: audit: type=1130 audit(1747182984.763:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.761583 systemd[1]: Finished systemd-modules-load.service.
May 14 00:36:24.765161 systemd[1]: Starting systemd-sysctl.service...
May 14 00:36:24.772760 systemd[1]: Finished systemd-sysctl.service.
May 14 00:36:24.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.775898 kernel: audit: type=1130 audit(1747182984.773:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.819893 kernel: Loading iSCSI transport class v2.0-870.
May 14 00:36:24.831897 kernel: iscsi: registered transport (tcp)
May 14 00:36:24.845898 kernel: iscsi: registered transport (qla4xxx)
May 14 00:36:24.845920 kernel: QLogic iSCSI HBA Driver
May 14 00:36:24.877606 systemd[1]: Finished dracut-cmdline.service.
May 14 00:36:24.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.878943 systemd[1]: Starting dracut-pre-udev.service...
May 14 00:36:24.881430 kernel: audit: type=1130 audit(1747182984.877:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:24.923917 kernel: raid6: neonx8 gen() 13693 MB/s
May 14 00:36:24.940888 kernel: raid6: neonx8 xor() 10751 MB/s
May 14 00:36:24.957889 kernel: raid6: neonx4 gen() 13489 MB/s
May 14 00:36:24.974895 kernel: raid6: neonx4 xor() 11229 MB/s
May 14 00:36:24.991899 kernel: raid6: neonx2 gen() 12958 MB/s
May 14 00:36:25.008897 kernel: raid6: neonx2 xor() 10401 MB/s
May 14 00:36:25.025898 kernel: raid6: neonx1 gen() 10576 MB/s
May 14 00:36:25.042897 kernel: raid6: neonx1 xor() 8795 MB/s
May 14 00:36:25.059897 kernel: raid6: int64x8 gen() 6265 MB/s
May 14 00:36:25.076897 kernel: raid6: int64x8 xor() 3543 MB/s
May 14 00:36:25.093898 kernel: raid6: int64x4 gen() 7208 MB/s
May 14 00:36:25.110897 kernel: raid6: int64x4 xor() 3854 MB/s
May 14 00:36:25.127898 kernel: raid6: int64x2 gen() 6146 MB/s
May 14 00:36:25.144897 kernel: raid6: int64x2 xor() 3320 MB/s
May 14 00:36:25.161897 kernel: raid6: int64x1 gen() 5041 MB/s
May 14 00:36:25.179091 kernel: raid6: int64x1 xor() 2645 MB/s
May 14 00:36:25.179110 kernel: raid6: using algorithm neonx8 gen() 13693 MB/s
May 14 00:36:25.179127 kernel: raid6: .... xor() 10751 MB/s, rmw enabled
May 14 00:36:25.179142 kernel: raid6: using neon recovery algorithm
May 14 00:36:25.190072 kernel: xor: measuring software checksum speed
May 14 00:36:25.190095 kernel: 8regs : 17195 MB/sec
May 14 00:36:25.190115 kernel: 32regs : 20143 MB/sec
May 14 00:36:25.190980 kernel: arm64_neon : 27879 MB/sec
May 14 00:36:25.190991 kernel: xor: using function: arm64_neon (27879 MB/sec)
May 14 00:36:25.243894 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 14 00:36:25.253405 systemd[1]: Finished dracut-pre-udev.service.
May 14 00:36:25.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:25.256000 audit: BPF prog-id=7 op=LOAD
May 14 00:36:25.256000 audit: BPF prog-id=8 op=LOAD
May 14 00:36:25.256778 systemd[1]: Starting systemd-udevd.service...
May 14 00:36:25.258007 kernel: audit: type=1130 audit(1747182985.253:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:25.270587 systemd-udevd[490]: Using default interface naming scheme 'v252'.
May 14 00:36:25.273791 systemd[1]: Started systemd-udevd.service.
May 14 00:36:25.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:25.276847 systemd[1]: Starting dracut-pre-trigger.service...
May 14 00:36:25.286884 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation
May 14 00:36:25.311968 systemd[1]: Finished dracut-pre-trigger.service.
May 14 00:36:25.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:25.313408 systemd[1]: Starting systemd-udev-trigger.service...
May 14 00:36:25.345014 systemd[1]: Finished systemd-udev-trigger.service.
May 14 00:36:25.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:25.368894 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 00:36:25.371835 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 00:36:25.371849 kernel: GPT:9289727 != 19775487
May 14 00:36:25.371858 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 00:36:25.371867 kernel: GPT:9289727 != 19775487
May 14 00:36:25.371890 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 00:36:25.371901 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:36:25.386144 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 14 00:36:25.387631 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (536)
May 14 00:36:25.393052 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 14 00:36:25.393771 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 14 00:36:25.397489 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 14 00:36:25.400668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 14 00:36:25.402001 systemd[1]: Starting disk-uuid.service...
May 14 00:36:25.407622 disk-uuid[562]: Primary Header is updated.
May 14 00:36:25.407622 disk-uuid[562]: Secondary Entries is updated.
May 14 00:36:25.407622 disk-uuid[562]: Secondary Header is updated.
May 14 00:36:25.412903 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:36:26.422582 disk-uuid[563]: The operation has completed successfully.
May 14 00:36:26.423778 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 00:36:26.442650 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 00:36:26.442744 systemd[1]: Finished disk-uuid.service.
May 14 00:36:26.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.446477 systemd[1]: Starting verity-setup.service...
May 14 00:36:26.462908 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 14 00:36:26.483837 systemd[1]: Found device dev-mapper-usr.device.
May 14 00:36:26.485680 systemd[1]: Mounting sysusr-usr.mount...
May 14 00:36:26.487688 systemd[1]: Finished verity-setup.service.
May 14 00:36:26.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.534568 systemd[1]: Mounted sysusr-usr.mount.
May 14 00:36:26.535603 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 14 00:36:26.535223 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 14 00:36:26.535793 systemd[1]: Starting ignition-setup.service...
May 14 00:36:26.537729 systemd[1]: Starting parse-ip-for-networkd.service...
May 14 00:36:26.543241 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 00:36:26.543276 kernel: BTRFS info (device vda6): using free space tree
May 14 00:36:26.543286 kernel: BTRFS info (device vda6): has skinny extents
May 14 00:36:26.550892 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 14 00:36:26.556396 systemd[1]: Finished ignition-setup.service.
May 14 00:36:26.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.557624 systemd[1]: Starting ignition-fetch-offline.service...
May 14 00:36:26.623588 systemd[1]: Finished parse-ip-for-networkd.service.
May 14 00:36:26.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.624000 audit: BPF prog-id=9 op=LOAD
May 14 00:36:26.625579 systemd[1]: Starting systemd-networkd.service...
May 14 00:36:26.637380 ignition[644]: Ignition 2.14.0
May 14 00:36:26.637389 ignition[644]: Stage: fetch-offline
May 14 00:36:26.637426 ignition[644]: no configs at "/usr/lib/ignition/base.d"
May 14 00:36:26.637434 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:36:26.637557 ignition[644]: parsed url from cmdline: ""
May 14 00:36:26.637560 ignition[644]: no config URL provided
May 14 00:36:26.637565 ignition[644]: reading system config file "/usr/lib/ignition/user.ign"
May 14 00:36:26.637571 ignition[644]: no config at "/usr/lib/ignition/user.ign"
May 14 00:36:26.637589 ignition[644]: op(1): [started] loading QEMU firmware config module
May 14 00:36:26.637593 ignition[644]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 00:36:26.647745 ignition[644]: op(1): [finished] loading QEMU firmware config module
May 14 00:36:26.651476 systemd-networkd[739]: lo: Link UP
May 14 00:36:26.651490 systemd-networkd[739]: lo: Gained carrier
May 14 00:36:26.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.651838 systemd-networkd[739]: Enumeration completed
May 14 00:36:26.652008 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 00:36:26.652335 systemd[1]: Started systemd-networkd.service.
May 14 00:36:26.652972 systemd-networkd[739]: eth0: Link UP
May 14 00:36:26.652975 systemd-networkd[739]: eth0: Gained carrier
May 14 00:36:26.653333 systemd[1]: Reached target network.target.
May 14 00:36:26.655215 systemd[1]: Starting iscsiuio.service...
May 14 00:36:26.664681 systemd[1]: Started iscsiuio.service.
May 14 00:36:26.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.666209 systemd[1]: Starting iscsid.service...
May 14 00:36:26.669404 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 14 00:36:26.669404 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 14 00:36:26.669404 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 14 00:36:26.669404 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
May 14 00:36:26.669404 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 14 00:36:26.669404 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 14 00:36:26.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.672129 systemd[1]: Started iscsid.service.
May 14 00:36:26.677906 systemd[1]: Starting dracut-initqueue.service...
May 14 00:36:26.682067 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 00:36:26.688576 systemd[1]: Finished dracut-initqueue.service.
May 14 00:36:26.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.689402 systemd[1]: Reached target remote-fs-pre.target.
May 14 00:36:26.690704 systemd[1]: Reached target remote-cryptsetup.target.
May 14 00:36:26.692182 systemd[1]: Reached target remote-fs.target.
May 14 00:36:26.694274 systemd[1]: Starting dracut-pre-mount.service...
May 14 00:36:26.701718 systemd[1]: Finished dracut-pre-mount.service.
May 14 00:36:26.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.712787 ignition[644]: parsing config with SHA512: 617f082f7395546c3240089a006a87571302ea738c8ea2e6fa0d766c11304f40df77f5be7ab0ec3cf623d1dfef640c7daaaedf3d57dbe2609175221c203a228e
May 14 00:36:26.719275 unknown[644]: fetched base config from "system"
May 14 00:36:26.719286 unknown[644]: fetched user config from "qemu"
May 14 00:36:26.719744 ignition[644]: fetch-offline: fetch-offline passed
May 14 00:36:26.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.720849 systemd[1]: Finished ignition-fetch-offline.service.
May 14 00:36:26.719795 ignition[644]: Ignition finished successfully
May 14 00:36:26.722263 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 00:36:26.722962 systemd[1]: Starting ignition-kargs.service...
May 14 00:36:26.731178 ignition[760]: Ignition 2.14.0
May 14 00:36:26.731189 ignition[760]: Stage: kargs
May 14 00:36:26.731280 ignition[760]: no configs at "/usr/lib/ignition/base.d"
May 14 00:36:26.733363 systemd[1]: Finished ignition-kargs.service.
May 14 00:36:26.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.731290 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:36:26.732198 ignition[760]: kargs: kargs passed
May 14 00:36:26.735439 systemd[1]: Starting ignition-disks.service...
May 14 00:36:26.732240 ignition[760]: Ignition finished successfully
May 14 00:36:26.741678 ignition[766]: Ignition 2.14.0
May 14 00:36:26.741688 ignition[766]: Stage: disks
May 14 00:36:26.741777 ignition[766]: no configs at "/usr/lib/ignition/base.d"
May 14 00:36:26.741786 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:36:26.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.743757 systemd[1]: Finished ignition-disks.service.
May 14 00:36:26.742966 ignition[766]: disks: disks passed
May 14 00:36:26.744539 systemd[1]: Reached target initrd-root-device.target.
May 14 00:36:26.743011 ignition[766]: Ignition finished successfully
May 14 00:36:26.745980 systemd[1]: Reached target local-fs-pre.target.
May 14 00:36:26.747089 systemd[1]: Reached target local-fs.target.
May 14 00:36:26.748091 systemd[1]: Reached target sysinit.target.
May 14 00:36:26.749328 systemd[1]: Reached target basic.target.
May 14 00:36:26.751235 systemd[1]: Starting systemd-fsck-root.service...
May 14 00:36:26.761764 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks
May 14 00:36:26.765021 systemd[1]: Finished systemd-fsck-root.service.
May 14 00:36:26.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.766353 systemd[1]: Mounting sysroot.mount...
May 14 00:36:26.771684 systemd[1]: Mounted sysroot.mount.
May 14 00:36:26.772759 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 14 00:36:26.772281 systemd[1]: Reached target initrd-root-fs.target.
May 14 00:36:26.774209 systemd[1]: Mounting sysroot-usr.mount...
May 14 00:36:26.774932 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 14 00:36:26.774971 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 00:36:26.774996 systemd[1]: Reached target ignition-diskful.target.
May 14 00:36:26.776804 systemd[1]: Mounted sysroot-usr.mount.
May 14 00:36:26.778352 systemd[1]: Starting initrd-setup-root.service...
May 14 00:36:26.782541 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
May 14 00:36:26.786129 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
May 14 00:36:26.789816 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
May 14 00:36:26.792919 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 00:36:26.819531 systemd[1]: Finished initrd-setup-root.service.
May 14 00:36:26.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.821141 systemd[1]: Starting ignition-mount.service...
May 14 00:36:26.822382 systemd[1]: Starting sysroot-boot.service...
May 14 00:36:26.827026 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
May 14 00:36:26.835989 ignition[827]: INFO : Ignition 2.14.0
May 14 00:36:26.835989 ignition[827]: INFO : Stage: mount
May 14 00:36:26.837526 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 00:36:26.837526 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 00:36:26.837526 ignition[827]: INFO : mount: mount passed
May 14 00:36:26.837526 ignition[827]: INFO : Ignition finished successfully
May 14 00:36:26.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:26.838460 systemd[1]: Finished ignition-mount.service.
May 14 00:36:26.847558 systemd[1]: Finished sysroot-boot.service.
May 14 00:36:26.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 14 00:36:27.493771 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 14 00:36:27.500123 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836)
May 14 00:36:27.500152 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 00:36:27.500161 kernel: BTRFS info (device vda6): using free space tree
May 14 00:36:27.501083 kernel: BTRFS info (device vda6): has skinny extents
May 14 00:36:27.503763 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 14 00:36:27.505123 systemd[1]: Starting ignition-files.service...
May 14 00:36:27.518222 ignition[856]: INFO : Ignition 2.14.0 May 14 00:36:27.518222 ignition[856]: INFO : Stage: files May 14 00:36:27.519980 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:36:27.519980 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:36:27.519980 ignition[856]: DEBUG : files: compiled without relabeling support, skipping May 14 00:36:27.523223 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 00:36:27.523223 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 00:36:27.525827 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 00:36:27.525827 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 00:36:27.525827 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 00:36:27.525698 unknown[856]: wrote ssh authorized keys file for user: core May 14 00:36:27.530744 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 00:36:27.530744 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 14 00:36:27.594039 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 00:36:27.803728 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 00:36:27.805795 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 00:36:27.805795 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 14 00:36:27.820088 systemd-networkd[739]: eth0: Gained IPv6LL May 14 00:36:28.243136 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 00:36:28.417860 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:36:28.419726 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 14 00:36:28.633329 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 00:36:29.318716 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:36:29.318716 ignition[856]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 00:36:29.322628 ignition[856]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:36:29.322628 ignition[856]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:36:29.322628 ignition[856]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 14 00:36:29.322628 ignition[856]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 14 00:36:29.322628 ignition[856]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:36:29.322628 ignition[856]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:36:29.322628 ignition[856]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 14 00:36:29.322628 ignition[856]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 14 00:36:29.322628 ignition[856]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 14 00:36:29.322628 ignition[856]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 14 00:36:29.322628 ignition[856]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:36:29.353704 ignition[856]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:36:29.356035 ignition[856]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 14 00:36:29.356035 ignition[856]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 00:36:29.356035 
ignition[856]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 00:36:29.356035 ignition[856]: INFO : files: files passed May 14 00:36:29.356035 ignition[856]: INFO : Ignition finished successfully May 14 00:36:29.366282 kernel: kauditd_printk_skb: 23 callbacks suppressed May 14 00:36:29.366308 kernel: audit: type=1130 audit(1747182989.357:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.356046 systemd[1]: Finished ignition-files.service. May 14 00:36:29.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.358951 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 14 00:36:29.374096 kernel: audit: type=1130 audit(1747182989.366:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.374113 kernel: audit: type=1130 audit(1747182989.369:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.374123 kernel: audit: type=1131 audit(1747182989.369:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.374182 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 14 00:36:29.362616 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 14 00:36:29.378158 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:36:29.363267 systemd[1]: Starting ignition-quench.service... May 14 00:36:29.366078 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 14 00:36:29.367135 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 00:36:29.367200 systemd[1]: Finished ignition-quench.service. May 14 00:36:29.370056 systemd[1]: Reached target ignition-complete.target. May 14 00:36:29.375206 systemd[1]: Starting initrd-parse-etc.service... May 14 00:36:29.386855 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 00:36:29.386951 systemd[1]: Finished initrd-parse-etc.service. 
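The file writes, the symlink, the unit installs, and the preset changes above are all driven by the Ignition config handed to the VM at first boot. A minimal Butane sketch that would produce roughly this sequence follows; the URLs, paths, and unit names are taken from the log itself, while the SSH key, the prepare-helm.service body, and the exact variant/spec version are placeholders:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - "ssh-ed25519 AAAA... placeholder key"
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-arm64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz
        - path: /opt/bin/cilium.tar.gz
          contents:
            source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw
        # /home/core/install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml and
        # /etc/flatcar/update.conf follow the same pattern with inline contents.
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            # unit body not recoverable from the log
        - name: coreos-metadata.service
          enabled: false

The op(a) link write corresponds to the links entry, and the op(10)/op(11)/op(12) preset entries (enable prepare-helm.service, disable coreos-metadata.service and remove its enablement symlinks) correspond to the enabled: true/false flags.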
May 14 00:36:29.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.388468 systemd[1]: Reached target initrd-fs.target. May 14 00:36:29.393801 kernel: audit: type=1130 audit(1747182989.388:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.393818 kernel: audit: type=1131 audit(1747182989.388:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.393176 systemd[1]: Reached target initrd.target. May 14 00:36:29.394326 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 14 00:36:29.394988 systemd[1]: Starting dracut-pre-pivot.service... May 14 00:36:29.404838 systemd[1]: Finished dracut-pre-pivot.service. May 14 00:36:29.407947 kernel: audit: type=1130 audit(1747182989.405:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.406369 systemd[1]: Starting initrd-cleanup.service... May 14 00:36:29.414122 systemd[1]: Stopped target nss-lookup.target. May 14 00:36:29.415031 systemd[1]: Stopped target remote-cryptsetup.target. May 14 00:36:29.416349 systemd[1]: Stopped target timers.target. May 14 00:36:29.417516 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:36:29.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.420897 kernel: audit: type=1131 audit(1747182989.418:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.417619 systemd[1]: Stopped dracut-pre-pivot.service. May 14 00:36:29.418753 systemd[1]: Stopped target initrd.target. May 14 00:36:29.421618 systemd[1]: Stopped target basic.target. May 14 00:36:29.422702 systemd[1]: Stopped target ignition-complete.target. May 14 00:36:29.423856 systemd[1]: Stopped target ignition-diskful.target. May 14 00:36:29.425026 systemd[1]: Stopped target initrd-root-device.target. May 14 00:36:29.426291 systemd[1]: Stopped target remote-fs.target. May 14 00:36:29.427564 systemd[1]: Stopped target remote-fs-pre.target. May 14 00:36:29.428908 systemd[1]: Stopped target sysinit.target. May 14 00:36:29.429996 systemd[1]: Stopped target local-fs.target. May 14 00:36:29.431140 systemd[1]: Stopped target local-fs-pre.target. May 14 00:36:29.432269 systemd[1]: Stopped target swap.target. 
May 14 00:36:29.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.433312 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 00:36:29.437932 kernel: audit: type=1131 audit(1747182989.434:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.433413 systemd[1]: Stopped dracut-pre-mount.service. May 14 00:36:29.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.434573 systemd[1]: Stopped target cryptsetup.target. May 14 00:36:29.442125 kernel: audit: type=1131 audit(1747182989.437:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.437414 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:36:29.437511 systemd[1]: Stopped dracut-initqueue.service. May 14 00:36:29.438741 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:36:29.438833 systemd[1]: Stopped ignition-fetch-offline.service. May 14 00:36:29.441771 systemd[1]: Stopped target paths.target. May 14 00:36:29.442821 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:36:29.446902 systemd[1]: Stopped systemd-ask-password-console.path. May 14 00:36:29.448419 systemd[1]: Stopped target slices.target. May 14 00:36:29.449224 systemd[1]: Stopped target sockets.target. May 14 00:36:29.450343 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 00:36:29.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.450450 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 14 00:36:29.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.451671 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:36:29.451764 systemd[1]: Stopped ignition-files.service. May 14 00:36:29.455293 iscsid[745]: iscsid shutting down. May 14 00:36:29.453786 systemd[1]: Stopping ignition-mount.service... May 14 00:36:29.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.454979 systemd[1]: Stopping iscsid.service... May 14 00:36:29.455793 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 00:36:29.455931 systemd[1]: Stopped kmod-static-nodes.service. May 14 00:36:29.457813 systemd[1]: Stopping sysroot-boot.service... 
May 14 00:36:29.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.461718 ignition[897]: INFO : Ignition 2.14.0 May 14 00:36:29.461718 ignition[897]: INFO : Stage: umount May 14 00:36:29.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.458632 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 00:36:29.464304 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:36:29.464304 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:36:29.464304 ignition[897]: INFO : umount: umount passed May 14 00:36:29.464304 ignition[897]: INFO : Ignition finished successfully May 14 00:36:29.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.458765 systemd[1]: Stopped systemd-udev-trigger.service. May 14 00:36:29.460099 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 00:36:29.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.460181 systemd[1]: Stopped dracut-pre-trigger.service. May 14 00:36:29.462609 systemd[1]: iscsid.service: Deactivated successfully. May 14 00:36:29.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.462714 systemd[1]: Stopped iscsid.service. May 14 00:36:29.464001 systemd[1]: iscsid.socket: Deactivated successfully. May 14 00:36:29.464064 systemd[1]: Closed iscsid.socket. May 14 00:36:29.465087 systemd[1]: Stopping iscsiuio.service... May 14 00:36:29.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.468712 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 00:36:29.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.469426 systemd[1]: iscsiuio.service: Deactivated successfully. May 14 00:36:29.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.469510 systemd[1]: Stopped iscsiuio.service. May 14 00:36:29.470897 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 14 00:36:29.470973 systemd[1]: Finished initrd-cleanup.service. May 14 00:36:29.471957 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 00:36:29.472035 systemd[1]: Stopped ignition-mount.service. May 14 00:36:29.474713 systemd[1]: Stopped target network.target. May 14 00:36:29.475958 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:36:29.475990 systemd[1]: Closed iscsiuio.socket. May 14 00:36:29.476973 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 00:36:29.477019 systemd[1]: Stopped ignition-disks.service. May 14 00:36:29.479104 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 00:36:29.479143 systemd[1]: Stopped ignition-kargs.service. May 14 00:36:29.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.480704 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 00:36:29.480743 systemd[1]: Stopped ignition-setup.service. May 14 00:36:29.482930 systemd[1]: Stopping systemd-networkd.service... May 14 00:36:29.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.484412 systemd[1]: Stopping systemd-resolved.service... May 14 00:36:29.491991 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 00:36:29.492087 systemd[1]: Stopped systemd-resolved.service. May 14 00:36:29.503000 audit: BPF prog-id=6 op=UNLOAD May 14 00:36:29.495089 systemd-networkd[739]: eth0: DHCPv6 lease lost May 14 00:36:29.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.496938 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 00:36:29.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.497034 systemd[1]: Stopped systemd-networkd.service. May 14 00:36:29.507000 audit: BPF prog-id=9 op=UNLOAD May 14 00:36:29.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.498997 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 00:36:29.499025 systemd[1]: Closed systemd-networkd.socket. May 14 00:36:29.501651 systemd[1]: Stopping network-cleanup.service... May 14 00:36:29.503986 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 00:36:29.504049 systemd[1]: Stopped parse-ip-for-networkd.service. May 14 00:36:29.505320 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:36:29.505358 systemd[1]: Stopped systemd-sysctl.service. May 14 00:36:29.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.507444 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
May 14 00:36:29.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.507489 systemd[1]: Stopped systemd-modules-load.service. May 14 00:36:29.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.508316 systemd[1]: Stopping systemd-udevd.service... May 14 00:36:29.512538 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:36:29.515654 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 00:36:29.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.515757 systemd[1]: Stopped network-cleanup.service. May 14 00:36:29.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.517192 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 00:36:29.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.517287 systemd[1]: Stopped sysroot-boot.service. May 14 00:36:29.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.518318 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 00:36:29.518428 systemd[1]: Stopped systemd-udevd.service. May 14 00:36:29.519742 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 00:36:29.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.519773 systemd[1]: Closed systemd-udevd-control.socket. May 14 00:36:29.520721 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 00:36:29.520749 systemd[1]: Closed systemd-udevd-kernel.socket. May 14 00:36:29.521872 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 00:36:29.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.521931 systemd[1]: Stopped dracut-pre-udev.service. May 14 00:36:29.523403 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 00:36:29.523441 systemd[1]: Stopped dracut-cmdline.service. May 14 00:36:29.524657 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 00:36:29.524700 systemd[1]: Stopped dracut-cmdline-ask.service. May 14 00:36:29.525896 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
May 14 00:36:29.525935 systemd[1]: Stopped initrd-setup-root.service. May 14 00:36:29.527936 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 14 00:36:29.529166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:36:29.529219 systemd[1]: Stopped systemd-vconsole-setup.service. May 14 00:36:29.532900 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 00:36:29.532978 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 14 00:36:29.533937 systemd[1]: Reached target initrd-switch-root.target. May 14 00:36:29.535834 systemd[1]: Starting initrd-switch-root.service... May 14 00:36:29.541864 systemd[1]: Switching root. May 14 00:36:29.559172 systemd-journald[290]: Journal stopped May 14 00:36:31.556643 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). May 14 00:36:31.556703 kernel: SELinux: Class mctp_socket not defined in policy. May 14 00:36:31.556716 kernel: SELinux: Class anon_inode not defined in policy. May 14 00:36:31.556727 kernel: SELinux: the above unknown classes and permissions will be allowed May 14 00:36:31.556737 kernel: SELinux: policy capability network_peer_controls=1 May 14 00:36:31.556747 kernel: SELinux: policy capability open_perms=1 May 14 00:36:31.556757 kernel: SELinux: policy capability extended_socket_class=1 May 14 00:36:31.556770 kernel: SELinux: policy capability always_check_network=0 May 14 00:36:31.556779 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 00:36:31.556795 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 00:36:31.556805 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 00:36:31.556815 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 00:36:31.556827 systemd[1]: Successfully loaded SELinux policy in 31.582ms. May 14 00:36:31.556849 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.616ms. May 14 00:36:31.556861 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 14 00:36:31.556879 systemd[1]: Detected virtualization kvm. May 14 00:36:31.556891 systemd[1]: Detected architecture arm64. May 14 00:36:31.556904 systemd[1]: Detected first boot. May 14 00:36:31.556915 systemd[1]: Initializing machine ID from VM UUID. May 14 00:36:31.556928 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 14 00:36:31.556938 systemd[1]: Populated /etc with preset unit settings. May 14 00:36:31.556949 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:36:31.556962 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:36:31.556973 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:36:31.556992 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 00:36:31.557003 systemd[1]: Stopped initrd-switch-root.service. 
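Note the two warnings for locksmithd.service: CPUShares= and MemoryLimit= are cgroup-v1-era directives that systemd 252 still maps but flags for removal. Since the shipped unit lives under read-only /usr, the usual fix is a drop-in; a sketch with illustrative values (the path and the numbers are assumptions, though the CPUShares=1024 ≈ CPUWeight=100 correspondence is systemd's own mapping):

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf  (hypothetical drop-in)
    [Service]
    # CPUShares=   -> CPUWeight=   (unified-hierarchy equivalent)
    # MemoryLimit= -> MemoryMax=
    CPUWeight=100
    MemoryMax=512M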
May 14 00:36:31.557014 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 00:36:31.557025 systemd[1]: Created slice system-addon\x2dconfig.slice. May 14 00:36:31.557036 systemd[1]: Created slice system-addon\x2drun.slice. May 14 00:36:31.557048 systemd[1]: Created slice system-getty.slice. May 14 00:36:31.557059 systemd[1]: Created slice system-modprobe.slice. May 14 00:36:31.557070 systemd[1]: Created slice system-serial\x2dgetty.slice. May 14 00:36:31.557081 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 14 00:36:31.557092 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 14 00:36:31.557107 systemd[1]: Created slice user.slice. May 14 00:36:31.557117 systemd[1]: Started systemd-ask-password-console.path. May 14 00:36:31.557128 systemd[1]: Started systemd-ask-password-wall.path. May 14 00:36:31.557139 systemd[1]: Set up automount boot.automount. May 14 00:36:31.557150 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 14 00:36:31.557161 systemd[1]: Stopped target initrd-switch-root.target. May 14 00:36:31.557172 systemd[1]: Stopped target initrd-fs.target. May 14 00:36:31.557182 systemd[1]: Stopped target initrd-root-fs.target. May 14 00:36:31.557193 systemd[1]: Reached target integritysetup.target. May 14 00:36:31.557203 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:36:31.557214 systemd[1]: Reached target remote-fs.target. May 14 00:36:31.557225 systemd[1]: Reached target slices.target. May 14 00:36:31.557237 systemd[1]: Reached target swap.target. May 14 00:36:31.557248 systemd[1]: Reached target torcx.target. May 14 00:36:31.557258 systemd[1]: Reached target veritysetup.target. May 14 00:36:31.557269 systemd[1]: Listening on systemd-coredump.socket. May 14 00:36:31.557280 systemd[1]: Listening on systemd-initctl.socket. May 14 00:36:31.557290 systemd[1]: Listening on systemd-networkd.socket. May 14 00:36:31.557300 systemd[1]: Listening on systemd-udevd-control.socket. May 14 00:36:31.557311 systemd[1]: Listening on systemd-udevd-kernel.socket. May 14 00:36:31.557322 systemd[1]: Listening on systemd-userdbd.socket. May 14 00:36:31.557334 systemd[1]: Mounting dev-hugepages.mount... May 14 00:36:31.557345 systemd[1]: Mounting dev-mqueue.mount... May 14 00:36:31.557356 systemd[1]: Mounting media.mount... May 14 00:36:31.557367 systemd[1]: Mounting sys-kernel-debug.mount... May 14 00:36:31.557378 systemd[1]: Mounting sys-kernel-tracing.mount... May 14 00:36:31.557388 systemd[1]: Mounting tmp.mount... May 14 00:36:31.557399 systemd[1]: Starting flatcar-tmpfiles.service... May 14 00:36:31.557410 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:36:31.557421 systemd[1]: Starting kmod-static-nodes.service... May 14 00:36:31.557433 systemd[1]: Starting modprobe@configfs.service... May 14 00:36:31.557445 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:36:31.557455 systemd[1]: Starting modprobe@drm.service... May 14 00:36:31.557465 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:36:31.557476 systemd[1]: Starting modprobe@fuse.service... May 14 00:36:31.557486 systemd[1]: Starting modprobe@loop.service... May 14 00:36:31.557497 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 00:36:31.557507 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 00:36:31.557518 systemd[1]: Stopped systemd-fsck-root.service. 
May 14 00:36:31.557530 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 00:36:31.557541 systemd[1]: Stopped systemd-fsck-usr.service. May 14 00:36:31.557551 kernel: fuse: init (API version 7.34) May 14 00:36:31.557561 systemd[1]: Stopped systemd-journald.service. May 14 00:36:31.557571 kernel: loop: module loaded May 14 00:36:31.557581 systemd[1]: Starting systemd-journald.service... May 14 00:36:31.557592 systemd[1]: Starting systemd-modules-load.service... May 14 00:36:31.557603 systemd[1]: Starting systemd-network-generator.service... May 14 00:36:31.557618 systemd[1]: Starting systemd-remount-fs.service... May 14 00:36:31.557630 systemd[1]: Starting systemd-udev-trigger.service... May 14 00:36:31.557642 systemd[1]: verity-setup.service: Deactivated successfully. May 14 00:36:31.557653 systemd[1]: Stopped verity-setup.service. May 14 00:36:31.557663 systemd[1]: Mounted dev-hugepages.mount. May 14 00:36:31.557674 systemd[1]: Mounted dev-mqueue.mount. May 14 00:36:31.557684 systemd[1]: Mounted media.mount. May 14 00:36:31.557695 systemd[1]: Mounted sys-kernel-debug.mount. May 14 00:36:31.557705 systemd[1]: Mounted sys-kernel-tracing.mount. May 14 00:36:31.557716 systemd[1]: Mounted tmp.mount. May 14 00:36:31.557726 systemd[1]: Finished kmod-static-nodes.service. May 14 00:36:31.557738 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 00:36:31.557749 systemd[1]: Finished modprobe@configfs.service. May 14 00:36:31.557760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:36:31.557771 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:36:31.557782 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:36:31.557793 systemd[1]: Finished modprobe@drm.service. May 14 00:36:31.557805 systemd-journald[993]: Journal started May 14 00:36:31.557853 systemd-journald[993]: Runtime Journal (/run/log/journal/4c7b588510c34383b6ff980dbfeaefe8) is 6.0M, max 48.7M, 42.6M free. 
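journald starts here with a volatile Runtime Journal under /run (6.0M used against a 48.7M cap) because /var is not writable this early; the persistent System Journal takes over when systemd-journal-flush runs further down. Both caps are tunable via a journald.conf drop-in; a sketch using the limits the log reports as illustrative values:

    # /etc/systemd/journald.conf.d/10-size.conf
    [Journal]
    RuntimeMaxUse=48M     # cap for /run/log/journal before the flush
    SystemMaxUse=195M     # cap for /var/log/journal after the flush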
May 14 00:36:29.626000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 00:36:29.731000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:36:29.731000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:36:29.731000 audit: BPF prog-id=10 op=LOAD May 14 00:36:29.731000 audit: BPF prog-id=10 op=UNLOAD May 14 00:36:29.731000 audit: BPF prog-id=11 op=LOAD May 14 00:36:29.731000 audit: BPF prog-id=11 op=UNLOAD May 14 00:36:29.770000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 14 00:36:29.770000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58a2 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:36:29.770000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 14 00:36:29.771000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 14 00:36:29.771000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5979 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:36:29.771000 audit: CWD cwd="/" May 14 00:36:29.771000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 14 00:36:29.771000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 14 00:36:29.771000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 14 00:36:31.429000 audit: BPF prog-id=12 op=LOAD May 14 00:36:31.429000 audit: BPF prog-id=3 op=UNLOAD May 14 00:36:31.429000 audit: BPF prog-id=13 op=LOAD May 14 00:36:31.429000 audit: BPF prog-id=14 op=LOAD May 14 00:36:31.429000 audit: BPF prog-id=4 op=UNLOAD May 14 00:36:31.429000 audit: BPF prog-id=5 op=UNLOAD May 14 00:36:31.429000 audit: BPF prog-id=15 op=LOAD May 14 00:36:31.429000 audit: BPF prog-id=12 op=UNLOAD May 14 00:36:31.429000 
audit: BPF prog-id=16 op=LOAD May 14 00:36:31.429000 audit: BPF prog-id=17 op=LOAD May 14 00:36:31.429000 audit: BPF prog-id=13 op=UNLOAD May 14 00:36:31.429000 audit: BPF prog-id=14 op=UNLOAD May 14 00:36:31.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.438000 audit: BPF prog-id=15 op=UNLOAD May 14 00:36:31.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.527000 audit: BPF prog-id=18 op=LOAD May 14 00:36:31.527000 audit: BPF prog-id=19 op=LOAD May 14 00:36:31.527000 audit: BPF prog-id=20 op=LOAD May 14 00:36:31.527000 audit: BPF prog-id=16 op=UNLOAD May 14 00:36:31.527000 audit: BPF prog-id=17 op=UNLOAD May 14 00:36:31.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.551000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 14 00:36:31.551000 audit[993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffffd5f3200 a2=4000 a3=1 items=0 ppid=1 pid=993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:36:31.551000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 14 00:36:31.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:36:31.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.770268 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:36:31.559107 systemd[1]: Started systemd-journald.service. May 14 00:36:31.428269 systemd[1]: Queued start job for default target multi-user.target. May 14 00:36:31.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:29.770545 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 14 00:36:31.428280 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 14 00:36:29.770563 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 14 00:36:31.431110 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 00:36:29.770593 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 14 00:36:31.559443 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:36:29.770602 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=debug msg="skipped missing lower profile" missing profile=oem May 14 00:36:29.770641 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 14 00:36:31.559698 systemd[1]: Finished modprobe@efi_pstore.service. 
May 14 00:36:29.770654 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 14 00:36:29.770846 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 14 00:36:29.770896 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 14 00:36:29.770908 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 14 00:36:29.771313 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 14 00:36:29.771347 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 14 00:36:29.771365 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 14 00:36:31.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:36:29.771379 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 14 00:36:29.771395 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 14 00:36:29.771408 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 14 00:36:31.187259 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:31Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:36:31.187513 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:31Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:36:31.187609 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:31Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:36:31.560817 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 00:36:31.187788 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:31Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:36:31.560965 systemd[1]: Finished modprobe@fuse.service. May 14 00:36:31.187839 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:31Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 14 00:36:31.187927 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-14T00:36:31Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 14 00:36:31.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.561854 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:36:31.562112 systemd[1]: Finished modprobe@loop.service. 
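The torcx-generator lines interleaved above (logged during the initrd pass, emitted here after the journal restart) show the vendor profile resolving the docker:com.coreos.cl archive, unpacking it to /run/torcx/unpack, propagating its binaries and units, and sealing the result into /run/metadata/torcx. For reference, a torcx profile manifest is a small JSON document of roughly this shape; torcx is deprecated on current Flatcar and this sketch is from memory, not from the log:

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }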
May 14 00:36:31.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.562991 systemd[1]: Finished systemd-modules-load.service. May 14 00:36:31.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.563868 systemd[1]: Finished systemd-network-generator.service. May 14 00:36:31.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.564928 systemd[1]: Finished systemd-remount-fs.service. May 14 00:36:31.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.565985 systemd[1]: Reached target network-pre.target. May 14 00:36:31.567803 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 14 00:36:31.569521 systemd[1]: Mounting sys-kernel-config.mount... May 14 00:36:31.570094 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:36:31.571516 systemd[1]: Starting systemd-hwdb-update.service... May 14 00:36:31.573299 systemd[1]: Starting systemd-journal-flush.service... May 14 00:36:31.573999 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:36:31.575088 systemd[1]: Starting systemd-random-seed.service... May 14 00:36:31.575772 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:36:31.576832 systemd[1]: Starting systemd-sysctl.service... May 14 00:36:31.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.580447 systemd[1]: Finished flatcar-tmpfiles.service. May 14 00:36:31.581357 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 14 00:36:31.582239 systemd[1]: Mounted sys-kernel-config.mount. May 14 00:36:31.583922 systemd[1]: Starting systemd-sysusers.service... May 14 00:36:31.584788 systemd-journald[993]: Time spent on flushing to /var/log/journal/4c7b588510c34383b6ff980dbfeaefe8 is 16.196ms for 1000 entries. May 14 00:36:31.584788 systemd-journald[993]: System Journal (/var/log/journal/4c7b588510c34383b6ff980dbfeaefe8) is 8.0M, max 195.6M, 187.6M free. May 14 00:36:31.613094 systemd-journald[993]: Received client request to flush runtime journal. May 14 00:36:31.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:36:31.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.588162 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:36:31.614219 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 14 00:36:31.589133 systemd[1]: Finished systemd-random-seed.service. May 14 00:36:31.589933 systemd[1]: Reached target first-boot-complete.target. May 14 00:36:31.591661 systemd[1]: Starting systemd-udev-settle.service... May 14 00:36:31.601357 systemd[1]: Finished systemd-sysctl.service. May 14 00:36:31.609815 systemd[1]: Finished systemd-sysusers.service. May 14 00:36:31.614007 systemd[1]: Finished systemd-journal-flush.service. May 14 00:36:31.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.929368 systemd[1]: Finished systemd-hwdb-update.service. May 14 00:36:31.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.929000 audit: BPF prog-id=21 op=LOAD May 14 00:36:31.929000 audit: BPF prog-id=22 op=LOAD May 14 00:36:31.929000 audit: BPF prog-id=7 op=UNLOAD May 14 00:36:31.929000 audit: BPF prog-id=8 op=UNLOAD May 14 00:36:31.931382 systemd[1]: Starting systemd-udevd.service... May 14 00:36:31.949933 systemd-udevd[1035]: Using default interface naming scheme 'v252'. May 14 00:36:31.960909 systemd[1]: Started systemd-udevd.service. May 14 00:36:31.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:31.966000 audit: BPF prog-id=23 op=LOAD May 14 00:36:31.968353 systemd[1]: Starting systemd-networkd.service... May 14 00:36:31.980000 audit: BPF prog-id=24 op=LOAD May 14 00:36:31.980000 audit: BPF prog-id=25 op=LOAD May 14 00:36:31.980000 audit: BPF prog-id=26 op=LOAD May 14 00:36:31.982012 systemd[1]: Starting systemd-userdbd.service... May 14 00:36:31.986216 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 14 00:36:32.011970 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:36:32.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.018204 systemd[1]: Started systemd-userdbd.service. 
May 14 00:36:32.080337 systemd-networkd[1055]: lo: Link UP May 14 00:36:32.080348 systemd-networkd[1055]: lo: Gained carrier May 14 00:36:32.080702 systemd-networkd[1055]: Enumeration completed May 14 00:36:32.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.080790 systemd[1]: Started systemd-networkd.service. May 14 00:36:32.080806 systemd-networkd[1055]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:36:32.084540 systemd-networkd[1055]: eth0: Link UP May 14 00:36:32.084551 systemd-networkd[1055]: eth0: Gained carrier May 14 00:36:32.088216 systemd[1]: Finished systemd-udev-settle.service. May 14 00:36:32.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.090059 systemd[1]: Starting lvm2-activation-early.service... May 14 00:36:32.099881 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:36:32.107014 systemd-networkd[1055]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:36:32.128717 systemd[1]: Finished lvm2-activation-early.service. May 14 00:36:32.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.129572 systemd[1]: Reached target cryptsetup.target. May 14 00:36:32.131363 systemd[1]: Starting lvm2-activation.service... May 14 00:36:32.134866 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:36:32.166745 systemd[1]: Finished lvm2-activation.service. May 14 00:36:32.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.167517 systemd[1]: Reached target local-fs-pre.target. May 14 00:36:32.168194 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:36:32.168223 systemd[1]: Reached target local-fs.target. May 14 00:36:32.168788 systemd[1]: Reached target machines.target. May 14 00:36:32.170549 systemd[1]: Starting ldconfig.service... May 14 00:36:32.171484 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:36:32.171561 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:36:32.172745 systemd[1]: Starting systemd-boot-update.service... May 14 00:36:32.174531 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 14 00:36:32.176523 systemd[1]: Starting systemd-machine-id-commit.service... May 14 00:36:32.178370 systemd[1]: Starting systemd-sysext.service... May 14 00:36:32.179718 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) May 14 00:36:32.181247 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
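eth0 is matched here by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network and acquires 10.0.0.47/16 over DHCPv4. In spirit that shipped file amounts to the following .network unit (a sketch, not the verbatim file; Name=* is an assumption about the match rule):

    [Match]
    Name=*

    [Network]
    DHCP=yes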
May 14 00:36:32.187076 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 14 00:36:32.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.193042 systemd[1]: Unmounting usr-share-oem.mount... May 14 00:36:32.198272 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 14 00:36:32.198482 systemd[1]: Unmounted usr-share-oem.mount. May 14 00:36:32.208897 kernel: loop0: detected capacity change from 0 to 194096 May 14 00:36:32.243764 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 00:36:32.244398 systemd[1]: Finished systemd-machine-id-commit.service. May 14 00:36:32.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.252896 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:36:32.262910 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) May 14 00:36:32.262910 systemd-fsck[1081]: /dev/vda1: 236 files, 117310/258078 clusters May 14 00:36:32.265259 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 14 00:36:32.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.267494 systemd[1]: Mounting boot.mount... May 14 00:36:32.272888 kernel: loop1: detected capacity change from 0 to 194096 May 14 00:36:32.276987 systemd[1]: Mounted boot.mount. May 14 00:36:32.281045 (sd-sysext)[1086]: Using extensions 'kubernetes'. May 14 00:36:32.281384 (sd-sysext)[1086]: Merged extensions into '/usr'. May 14 00:36:32.301472 systemd[1]: Finished systemd-boot-update.service. May 14 00:36:32.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.302909 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:36:32.304808 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:36:32.307250 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:36:32.310451 systemd[1]: Starting modprobe@loop.service... May 14 00:36:32.311186 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:36:32.311501 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:36:32.312737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:36:32.312863 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:36:32.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:36:32.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.314003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:36:32.314140 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:36:32.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.315271 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:36:32.315412 systemd[1]: Finished modprobe@loop.service. May 14 00:36:32.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.316551 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:36:32.316666 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:36:32.362103 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:36:32.365342 systemd[1]: Finished ldconfig.service. May 14 00:36:32.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.545006 systemd[1]: Mounting usr-share-oem.mount... May 14 00:36:32.549740 systemd[1]: Mounted usr-share-oem.mount. May 14 00:36:32.551345 systemd[1]: Finished systemd-sysext.service. May 14 00:36:32.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.553026 systemd[1]: Starting ensure-sysext.service... May 14 00:36:32.554488 systemd[1]: Starting systemd-tmpfiles-setup.service... May 14 00:36:32.558654 systemd[1]: Reloading. May 14 00:36:32.568709 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 14 00:36:32.570386 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:36:32.572836 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 14 00:36:32.597331 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-05-14T00:36:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:36:32.597660 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-05-14T00:36:32Z" level=info msg="torcx already run" May 14 00:36:32.653512 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:36:32.653529 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:36:32.668703 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:36:32.710000 audit: BPF prog-id=27 op=LOAD May 14 00:36:32.710000 audit: BPF prog-id=18 op=UNLOAD May 14 00:36:32.710000 audit: BPF prog-id=28 op=LOAD May 14 00:36:32.710000 audit: BPF prog-id=29 op=LOAD May 14 00:36:32.710000 audit: BPF prog-id=19 op=UNLOAD May 14 00:36:32.710000 audit: BPF prog-id=20 op=UNLOAD May 14 00:36:32.711000 audit: BPF prog-id=30 op=LOAD May 14 00:36:32.711000 audit: BPF prog-id=23 op=UNLOAD May 14 00:36:32.712000 audit: BPF prog-id=31 op=LOAD May 14 00:36:32.712000 audit: BPF prog-id=32 op=LOAD May 14 00:36:32.712000 audit: BPF prog-id=21 op=UNLOAD May 14 00:36:32.712000 audit: BPF prog-id=22 op=UNLOAD May 14 00:36:32.712000 audit: BPF prog-id=33 op=LOAD May 14 00:36:32.712000 audit: BPF prog-id=24 op=UNLOAD May 14 00:36:32.712000 audit: BPF prog-id=34 op=LOAD May 14 00:36:32.712000 audit: BPF prog-id=35 op=LOAD May 14 00:36:32.712000 audit: BPF prog-id=25 op=UNLOAD May 14 00:36:32.712000 audit: BPF prog-id=26 op=UNLOAD May 14 00:36:32.715279 systemd[1]: Finished systemd-tmpfiles-setup.service. May 14 00:36:32.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.719179 systemd[1]: Starting audit-rules.service... May 14 00:36:32.720820 systemd[1]: Starting clean-ca-certificates.service... May 14 00:36:32.723160 systemd[1]: Starting systemd-journal-catalog-update.service... May 14 00:36:32.726000 audit: BPF prog-id=36 op=LOAD May 14 00:36:32.727993 systemd[1]: Starting systemd-resolved.service... May 14 00:36:32.730000 audit: BPF prog-id=37 op=LOAD May 14 00:36:32.732367 systemd[1]: Starting systemd-timesyncd.service... May 14 00:36:32.734209 systemd[1]: Starting systemd-update-utmp.service... May 14 00:36:32.737188 systemd[1]: Finished clean-ca-certificates.service. May 14 00:36:32.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.738305 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 14 00:36:32.737000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 14 00:36:32.741571 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:36:32.742855 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:36:32.745354 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:36:32.747128 systemd[1]: Starting modprobe@loop.service... May 14 00:36:32.747778 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:36:32.747992 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:36:32.748128 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:36:32.749251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:36:32.749382 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:36:32.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.750588 systemd[1]: Finished systemd-journal-catalog-update.service. May 14 00:36:32.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.751820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:36:32.751945 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:36:32.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.753090 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:36:32.753197 systemd[1]: Finished modprobe@loop.service. May 14 00:36:32.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.754401 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 14 00:36:32.754531 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:36:32.755773 systemd[1]: Starting systemd-update-done.service... May 14 00:36:32.757152 systemd[1]: Finished systemd-update-utmp.service. May 14 00:36:32.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.759947 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:36:32.761120 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:36:32.762979 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:36:32.764696 systemd[1]: Starting modprobe@loop.service... May 14 00:36:32.765365 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:36:32.765480 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:36:32.765569 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:36:32.766326 systemd[1]: Finished systemd-update-done.service. May 14 00:36:32.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.767488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:36:32.767615 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:36:32.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.768772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:36:32.768894 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:36:32.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.769969 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:36:32.770078 systemd[1]: Finished modprobe@loop.service. May 14 00:36:32.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 14 00:36:32.771237 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:36:32.771345 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:36:32.773694 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:36:32.775037 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:36:32.776836 systemd[1]: Starting modprobe@drm.service... May 14 00:36:32.779076 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:36:32.781419 systemd[1]: Starting modprobe@loop.service... May 14 00:36:32.782148 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:36:32.782332 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:36:32.783647 systemd[1]: Starting systemd-networkd-wait-online.service... May 14 00:36:32.784546 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:36:32.785760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:36:32.785902 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:36:32.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.787100 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:36:32.787210 systemd[1]: Finished modprobe@drm.service. May 14 00:36:32.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.788268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:36:32.788396 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:36:32.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.789558 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:36:32.789680 systemd[1]: Finished modprobe@loop.service. 
May 14 00:36:32.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.791083 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:36:32.791176 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:36:32.792227 systemd[1]: Finished ensure-sysext.service. May 14 00:36:32.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:36:32.794000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 14 00:36:32.794000 audit[1184]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc82c12b0 a2=420 a3=0 items=0 ppid=1152 pid=1184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:36:32.794000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 14 00:36:32.795829 augenrules[1184]: No rules May 14 00:36:32.796258 systemd-resolved[1156]: Positive Trust Anchors: May 14 00:36:32.796265 systemd-resolved[1156]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:36:32.796291 systemd-resolved[1156]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:36:32.796398 systemd[1]: Finished audit-rules.service. May 14 00:36:32.802249 systemd[1]: Started systemd-timesyncd.service. May 14 00:36:32.803148 systemd-timesyncd[1160]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 00:36:32.803282 systemd[1]: Reached target time-set.target. May 14 00:36:32.803468 systemd-timesyncd[1160]: Initial clock synchronization to Wed 2025-05-14 00:36:32.714051 UTC. May 14 00:36:32.815374 systemd-resolved[1156]: Defaulting to hostname 'linux'. May 14 00:36:32.818572 systemd[1]: Started systemd-resolved.service. May 14 00:36:32.819301 systemd[1]: Reached target network.target. May 14 00:36:32.819928 systemd[1]: Reached target nss-lookup.target. May 14 00:36:32.820544 systemd[1]: Reached target sysinit.target. May 14 00:36:32.821215 systemd[1]: Started motdgen.path. May 14 00:36:32.821800 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 14 00:36:32.822902 systemd[1]: Started logrotate.timer. May 14 00:36:32.823569 systemd[1]: Started mdadm.timer. May 14 00:36:32.824145 systemd[1]: Started systemd-tmpfiles-clean.timer. 
May 14 00:36:32.824774 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:36:32.824803 systemd[1]: Reached target paths.target. May 14 00:36:32.825400 systemd[1]: Reached target timers.target. May 14 00:36:32.826301 systemd[1]: Listening on dbus.socket. May 14 00:36:32.827942 systemd[1]: Starting docker.socket... May 14 00:36:32.831124 systemd[1]: Listening on sshd.socket. May 14 00:36:32.831834 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:36:32.832265 systemd[1]: Listening on docker.socket. May 14 00:36:32.832980 systemd[1]: Reached target sockets.target. May 14 00:36:32.833572 systemd[1]: Reached target basic.target. May 14 00:36:32.834221 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:36:32.834254 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:36:32.835229 systemd[1]: Starting containerd.service... May 14 00:36:32.836773 systemd[1]: Starting dbus.service... May 14 00:36:32.838529 systemd[1]: Starting enable-oem-cloudinit.service... May 14 00:36:32.840463 systemd[1]: Starting extend-filesystems.service... May 14 00:36:32.841284 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 14 00:36:32.842483 systemd[1]: Starting motdgen.service... May 14 00:36:32.844181 systemd[1]: Starting prepare-helm.service... May 14 00:36:32.846525 systemd[1]: Starting ssh-key-proc-cmdline.service... May 14 00:36:32.847102 jq[1194]: false May 14 00:36:32.848466 systemd[1]: Starting sshd-keygen.service... May 14 00:36:32.851219 systemd[1]: Starting systemd-logind.service... May 14 00:36:32.851993 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:36:32.852078 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 00:36:32.852449 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 00:36:32.853125 systemd[1]: Starting update-engine.service... May 14 00:36:32.855100 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 14 00:36:32.858171 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:36:32.858357 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 14 00:36:32.858790 jq[1211]: true May 14 00:36:32.859404 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 00:36:32.859564 systemd[1]: Finished ssh-key-proc-cmdline.service. May 14 00:36:32.872524 jq[1215]: true May 14 00:36:32.874696 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:36:32.874860 systemd[1]: Finished motdgen.service. 
May 14 00:36:32.876619 extend-filesystems[1195]: Found loop1 May 14 00:36:32.876619 extend-filesystems[1195]: Found vda May 14 00:36:32.876619 extend-filesystems[1195]: Found vda1 May 14 00:36:32.878316 extend-filesystems[1195]: Found vda2 May 14 00:36:32.878316 extend-filesystems[1195]: Found vda3 May 14 00:36:32.878316 extend-filesystems[1195]: Found usr May 14 00:36:32.878316 extend-filesystems[1195]: Found vda4 May 14 00:36:32.878316 extend-filesystems[1195]: Found vda6 May 14 00:36:32.878316 extend-filesystems[1195]: Found vda7 May 14 00:36:32.878316 extend-filesystems[1195]: Found vda9 May 14 00:36:32.878316 extend-filesystems[1195]: Checking size of /dev/vda9 May 14 00:36:32.883748 systemd[1]: Started dbus.service. May 14 00:36:32.883507 dbus-daemon[1193]: [system] SELinux support is enabled May 14 00:36:32.886192 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:36:32.886213 systemd[1]: Reached target system-config.target. May 14 00:36:32.889278 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 00:36:32.889300 systemd[1]: Reached target user-config.target. May 14 00:36:32.910303 extend-filesystems[1195]: Resized partition /dev/vda9 May 14 00:36:32.919409 tar[1213]: linux-arm64/helm May 14 00:36:32.919616 extend-filesystems[1234]: resize2fs 1.46.5 (30-Dec-2021) May 14 00:36:32.928918 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 00:36:32.934414 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button) May 14 00:36:32.935256 systemd-logind[1203]: New seat seat0. May 14 00:36:32.940143 systemd[1]: Started systemd-logind.service. May 14 00:36:32.949742 update_engine[1207]: I0514 00:36:32.949430 1207 main.cc:92] Flatcar Update Engine starting May 14 00:36:32.953723 systemd[1]: Started update-engine.service. May 14 00:36:32.957099 update_engine[1207]: I0514 00:36:32.953745 1207 update_check_scheduler.cc:74] Next update check in 10m34s May 14 00:36:32.956201 systemd[1]: Started locksmithd.service. May 14 00:36:32.959900 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 00:36:32.974937 extend-filesystems[1234]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 00:36:32.974937 extend-filesystems[1234]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 00:36:32.974937 extend-filesystems[1234]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 00:36:32.981404 extend-filesystems[1195]: Resized filesystem in /dev/vda9 May 14 00:36:32.976679 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 14 00:36:32.982110 bash[1244]: Updated "/home/core/.ssh/authorized_keys" May 14 00:36:32.979074 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:36:32.979219 systemd[1]: Finished extend-filesystems.service. May 14 00:36:32.997694 env[1216]: time="2025-05-14T00:36:32.997640160Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 14 00:36:33.011340 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:36:33.020273 env[1216]: time="2025-05-14T00:36:33.020153975Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 14 00:36:33.020340 env[1216]: time="2025-05-14T00:36:33.020281302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 00:36:33.021426 env[1216]: time="2025-05-14T00:36:33.021387772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 00:36:33.021426 env[1216]: time="2025-05-14T00:36:33.021420998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 00:36:33.021636 env[1216]: time="2025-05-14T00:36:33.021606550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:36:33.021636 env[1216]: time="2025-05-14T00:36:33.021627988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 00:36:33.021690 env[1216]: time="2025-05-14T00:36:33.021641437Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 14 00:36:33.021690 env[1216]: time="2025-05-14T00:36:33.021651009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 00:36:33.021735 env[1216]: time="2025-05-14T00:36:33.021719756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 00:36:33.022002 env[1216]: time="2025-05-14T00:36:33.021983903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 00:36:33.022144 env[1216]: time="2025-05-14T00:36:33.022126182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:36:33.022173 env[1216]: time="2025-05-14T00:36:33.022143547Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 14 00:36:33.022208 env[1216]: time="2025-05-14T00:36:33.022194058Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 14 00:36:33.022247 env[1216]: time="2025-05-14T00:36:33.022208100Z" level=info msg="metadata content store policy set" policy=shared May 14 00:36:33.025050 env[1216]: time="2025-05-14T00:36:33.025015114Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 00:36:33.025050 env[1216]: time="2025-05-14T00:36:33.025046995Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 00:36:33.025163 env[1216]: time="2025-05-14T00:36:33.025059099Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 00:36:33.025163 env[1216]: time="2025-05-14T00:36:33.025088646Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 14 00:36:33.025163 env[1216]: time="2025-05-14T00:36:33.025103163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 00:36:33.025163 env[1216]: time="2025-05-14T00:36:33.025115978Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 00:36:33.025163 env[1216]: time="2025-05-14T00:36:33.025127093Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 00:36:33.025473 env[1216]: time="2025-05-14T00:36:33.025448081Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 00:36:33.025473 env[1216]: time="2025-05-14T00:36:33.025473041Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 14 00:36:33.025533 env[1216]: time="2025-05-14T00:36:33.025486450Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 00:36:33.025533 env[1216]: time="2025-05-14T00:36:33.025499186Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 00:36:33.025533 env[1216]: time="2025-05-14T00:36:33.025511092Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 00:36:33.025628 env[1216]: time="2025-05-14T00:36:33.025611483Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 00:36:33.025703 env[1216]: time="2025-05-14T00:36:33.025690395Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 00:36:33.025945 env[1216]: time="2025-05-14T00:36:33.025929820Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 00:36:33.025994 env[1216]: time="2025-05-14T00:36:33.025957074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 00:36:33.025994 env[1216]: time="2025-05-14T00:36:33.025972579Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 00:36:33.026084 env[1216]: time="2025-05-14T00:36:33.026073879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 00:36:33.026113 env[1216]: time="2025-05-14T00:36:33.026088040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 00:36:33.026113 env[1216]: time="2025-05-14T00:36:33.026099748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 00:36:33.026113 env[1216]: time="2025-05-14T00:36:33.026110665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 00:36:33.026182 env[1216]: time="2025-05-14T00:36:33.026121978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 00:36:33.026182 env[1216]: time="2025-05-14T00:36:33.026132856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 00:36:33.026182 env[1216]: time="2025-05-14T00:36:33.026146779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 14 00:36:33.026182 env[1216]: time="2025-05-14T00:36:33.026158210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 00:36:33.026182 env[1216]: time="2025-05-14T00:36:33.026169760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 00:36:33.026353 env[1216]: time="2025-05-14T00:36:33.026288306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 00:36:33.026353 env[1216]: time="2025-05-14T00:36:33.026308875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 00:36:33.026353 env[1216]: time="2025-05-14T00:36:33.026329641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 00:36:33.026353 env[1216]: time="2025-05-14T00:36:33.026343248Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 00:36:33.026443 env[1216]: time="2025-05-14T00:36:33.026355550Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 14 00:36:33.026443 env[1216]: time="2025-05-14T00:36:33.026366150Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 00:36:33.026443 env[1216]: time="2025-05-14T00:36:33.026381458Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 14 00:36:33.026443 env[1216]: time="2025-05-14T00:36:33.026412509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 14 00:36:33.026644 env[1216]: time="2025-05-14T00:36:33.026586273Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 00:36:33.026644 env[1216]: time="2025-05-14T00:36:33.026641729Z" level=info msg="Connect containerd service" May 14 00:36:33.030218 env[1216]: time="2025-05-14T00:36:33.026669339Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 00:36:33.030218 env[1216]: time="2025-05-14T00:36:33.027399641Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:36:33.030218 env[1216]: time="2025-05-14T00:36:33.027718018Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 00:36:33.030218 env[1216]: time="2025-05-14T00:36:33.027751917Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 00:36:33.030218 env[1216]: time="2025-05-14T00:36:33.027787714Z" level=info msg="containerd successfully booted in 0.035145s" May 14 00:36:33.029075 systemd[1]: Started containerd.service. 
May 14 00:36:33.031414 env[1216]: time="2025-05-14T00:36:33.030584996Z" level=info msg="Start subscribing containerd event" May 14 00:36:33.031414 env[1216]: time="2025-05-14T00:36:33.030646069Z" level=info msg="Start recovering state" May 14 00:36:33.031414 env[1216]: time="2025-05-14T00:36:33.030708487Z" level=info msg="Start event monitor" May 14 00:36:33.031414 env[1216]: time="2025-05-14T00:36:33.030730835Z" level=info msg="Start snapshots syncer" May 14 00:36:33.031414 env[1216]: time="2025-05-14T00:36:33.030740922Z" level=info msg="Start cni network conf syncer for default" May 14 00:36:33.031414 env[1216]: time="2025-05-14T00:36:33.030748833Z" level=info msg="Start streaming server" May 14 00:36:33.277082 tar[1213]: linux-arm64/LICENSE May 14 00:36:33.277186 tar[1213]: linux-arm64/README.md May 14 00:36:33.281491 systemd[1]: Finished prepare-helm.service. May 14 00:36:33.899982 systemd-networkd[1055]: eth0: Gained IPv6LL May 14 00:36:33.901656 systemd[1]: Finished systemd-networkd-wait-online.service. May 14 00:36:33.902695 systemd[1]: Reached target network-online.target. May 14 00:36:33.904833 systemd[1]: Starting kubelet.service... May 14 00:36:34.402687 systemd[1]: Started kubelet.service. May 14 00:36:34.895472 kubelet[1262]: E0514 00:36:34.895365 1262 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:36:34.897940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:36:34.898055 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:36:35.421289 sshd_keygen[1217]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:36:35.440452 systemd[1]: Finished sshd-keygen.service. May 14 00:36:35.442375 systemd[1]: Starting issuegen.service... May 14 00:36:35.447112 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:36:35.447269 systemd[1]: Finished issuegen.service. May 14 00:36:35.449190 systemd[1]: Starting systemd-user-sessions.service... May 14 00:36:35.454751 systemd[1]: Finished systemd-user-sessions.service. May 14 00:36:35.456695 systemd[1]: Started getty@tty1.service. May 14 00:36:35.458436 systemd[1]: Started serial-getty@ttyAMA0.service. May 14 00:36:35.459315 systemd[1]: Reached target getty.target. May 14 00:36:35.459969 systemd[1]: Reached target multi-user.target. May 14 00:36:35.461607 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 14 00:36:35.467446 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 14 00:36:35.467599 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 14 00:36:35.468431 systemd[1]: Startup finished in 556ms (kernel) + 5.022s (initrd) + 5.877s (userspace) = 11.456s. May 14 00:36:37.319234 systemd[1]: Created slice system-sshd.slice. May 14 00:36:37.320303 systemd[1]: Started sshd@0-10.0.0.47:22-10.0.0.1:39862.service. May 14 00:36:37.363589 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 39862 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:36:37.365632 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:36:37.375209 systemd[1]: Created slice user-500.slice. May 14 00:36:37.376285 systemd[1]: Starting user-runtime-dir@500.service... 
May 14 00:36:37.378150 systemd-logind[1203]: New session 1 of user core. May 14 00:36:37.384903 systemd[1]: Finished user-runtime-dir@500.service. May 14 00:36:37.386228 systemd[1]: Starting user@500.service... May 14 00:36:37.388949 (systemd)[1288]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:36:37.446578 systemd[1288]: Queued start job for default target default.target. May 14 00:36:37.447067 systemd[1288]: Reached target paths.target. May 14 00:36:37.447098 systemd[1288]: Reached target sockets.target. May 14 00:36:37.447108 systemd[1288]: Reached target timers.target. May 14 00:36:37.447117 systemd[1288]: Reached target basic.target. May 14 00:36:37.447160 systemd[1288]: Reached target default.target. May 14 00:36:37.447182 systemd[1288]: Startup finished in 52ms. May 14 00:36:37.447241 systemd[1]: Started user@500.service. May 14 00:36:37.448180 systemd[1]: Started session-1.scope. May 14 00:36:37.498709 systemd[1]: Started sshd@1-10.0.0.47:22-10.0.0.1:39864.service. May 14 00:36:37.547616 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 39864 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:36:37.549064 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:36:37.553384 systemd[1]: Started session-2.scope. May 14 00:36:37.553634 systemd-logind[1203]: New session 2 of user core. May 14 00:36:37.605601 sshd[1297]: pam_unix(sshd:session): session closed for user core May 14 00:36:37.608039 systemd[1]: sshd@1-10.0.0.47:22-10.0.0.1:39864.service: Deactivated successfully. May 14 00:36:37.608597 systemd[1]: session-2.scope: Deactivated successfully. May 14 00:36:37.609117 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit. May 14 00:36:37.610041 systemd[1]: Started sshd@2-10.0.0.47:22-10.0.0.1:39866.service. May 14 00:36:37.610652 systemd-logind[1203]: Removed session 2. May 14 00:36:37.645528 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 39866 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:36:37.646537 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:36:37.649358 systemd-logind[1203]: New session 3 of user core. May 14 00:36:37.650072 systemd[1]: Started session-3.scope. May 14 00:36:37.697990 sshd[1303]: pam_unix(sshd:session): session closed for user core May 14 00:36:37.700242 systemd[1]: sshd@2-10.0.0.47:22-10.0.0.1:39866.service: Deactivated successfully. May 14 00:36:37.700723 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:36:37.701190 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit. May 14 00:36:37.702132 systemd[1]: Started sshd@3-10.0.0.47:22-10.0.0.1:39876.service. May 14 00:36:37.702687 systemd-logind[1203]: Removed session 3. May 14 00:36:37.737280 sshd[1309]: Accepted publickey for core from 10.0.0.1 port 39876 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:36:37.738307 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:36:37.741951 systemd[1]: Started session-4.scope. May 14 00:36:37.742204 systemd-logind[1203]: New session 4 of user core. May 14 00:36:37.793846 sshd[1309]: pam_unix(sshd:session): session closed for user core May 14 00:36:37.796049 systemd[1]: sshd@3-10.0.0.47:22-10.0.0.1:39876.service: Deactivated successfully. May 14 00:36:37.796548 systemd[1]: session-4.scope: Deactivated successfully. 
May 14 00:36:37.797127 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit. May 14 00:36:37.798084 systemd[1]: Started sshd@4-10.0.0.47:22-10.0.0.1:39886.service. May 14 00:36:37.798754 systemd-logind[1203]: Removed session 4. May 14 00:36:37.833282 sshd[1315]: Accepted publickey for core from 10.0.0.1 port 39886 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:36:37.834369 sshd[1315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:36:37.837282 systemd-logind[1203]: New session 5 of user core. May 14 00:36:37.837996 systemd[1]: Started session-5.scope. May 14 00:36:37.894835 sudo[1318]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:36:37.895650 sudo[1318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 14 00:36:37.948492 systemd[1]: Starting docker.service... May 14 00:36:38.032627 env[1329]: time="2025-05-14T00:36:38.032578058Z" level=info msg="Starting up" May 14 00:36:38.034062 env[1329]: time="2025-05-14T00:36:38.033978340Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 14 00:36:38.034146 env[1329]: time="2025-05-14T00:36:38.034133728Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 14 00:36:38.034231 env[1329]: time="2025-05-14T00:36:38.034215419Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 14 00:36:38.034291 env[1329]: time="2025-05-14T00:36:38.034278179Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 14 00:36:38.036214 env[1329]: time="2025-05-14T00:36:38.036190562Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 14 00:36:38.036214 env[1329]: time="2025-05-14T00:36:38.036211402Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 14 00:36:38.036317 env[1329]: time="2025-05-14T00:36:38.036225402Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 14 00:36:38.036317 env[1329]: time="2025-05-14T00:36:38.036234072Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 14 00:36:38.040150 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport520440786-merged.mount: Deactivated successfully. May 14 00:36:38.208551 env[1329]: time="2025-05-14T00:36:38.208178073Z" level=info msg="Loading containers: start." May 14 00:36:38.316901 kernel: Initializing XFRM netlink socket May 14 00:36:38.338502 env[1329]: time="2025-05-14T00:36:38.338466149Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 14 00:36:38.391045 systemd-networkd[1055]: docker0: Link UP May 14 00:36:38.416991 env[1329]: time="2025-05-14T00:36:38.416946971Z" level=info msg="Loading containers: done." May 14 00:36:38.432092 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1680141315-merged.mount: Deactivated successfully. 
May 14 00:36:38.433165 env[1329]: time="2025-05-14T00:36:38.433122559Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:36:38.433283 env[1329]: time="2025-05-14T00:36:38.433265459Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 14 00:36:38.433380 env[1329]: time="2025-05-14T00:36:38.433366837Z" level=info msg="Daemon has completed initialization" May 14 00:36:38.446847 systemd[1]: Started docker.service. May 14 00:36:38.452380 env[1329]: time="2025-05-14T00:36:38.452332012Z" level=info msg="API listen on /run/docker.sock" May 14 00:36:39.228640 env[1216]: time="2025-05-14T00:36:39.228593758Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 14 00:36:39.876326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343954194.mount: Deactivated successfully. May 14 00:36:41.498980 env[1216]: time="2025-05-14T00:36:41.498935684Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:41.500475 env[1216]: time="2025-05-14T00:36:41.500448397Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:41.504949 env[1216]: time="2025-05-14T00:36:41.504920433Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:41.506594 env[1216]: time="2025-05-14T00:36:41.506567909Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:41.507440 env[1216]: time="2025-05-14T00:36:41.507403582Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 14 00:36:41.517378 env[1216]: time="2025-05-14T00:36:41.517333827Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 14 00:36:43.507898 env[1216]: time="2025-05-14T00:36:43.507844450Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:43.509388 env[1216]: time="2025-05-14T00:36:43.509356251Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:43.511132 env[1216]: time="2025-05-14T00:36:43.511110661Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:43.512769 env[1216]: time="2025-05-14T00:36:43.512740755Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 14 00:36:43.513591 env[1216]: time="2025-05-14T00:36:43.513564895Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 14 00:36:43.522355 env[1216]: time="2025-05-14T00:36:43.522315603Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 14 00:36:44.948681 env[1216]: time="2025-05-14T00:36:44.948632173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:44.950188 env[1216]: time="2025-05-14T00:36:44.950146683Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:44.951857 env[1216]: time="2025-05-14T00:36:44.951830559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:44.954101 env[1216]: time="2025-05-14T00:36:44.954073719Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:44.954742 env[1216]: time="2025-05-14T00:36:44.954713715Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 14 00:36:44.963713 env[1216]: time="2025-05-14T00:36:44.963687629Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 00:36:45.148725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 00:36:45.148902 systemd[1]: Stopped kubelet.service. May 14 00:36:45.150243 systemd[1]: Starting kubelet.service... May 14 00:36:45.234419 systemd[1]: Started kubelet.service. May 14 00:36:45.336402 kubelet[1489]: E0514 00:36:45.336348 1489 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:36:45.339302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:36:45.339420 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:36:46.045334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339383841.mount: Deactivated successfully. 
May 14 00:36:46.590258 env[1216]: time="2025-05-14T00:36:46.590212191Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:46.591471 env[1216]: time="2025-05-14T00:36:46.591446126Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:46.592768 env[1216]: time="2025-05-14T00:36:46.592742938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:46.593832 env[1216]: time="2025-05-14T00:36:46.593802097Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:46.594169 env[1216]: time="2025-05-14T00:36:46.594146300Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 14 00:36:46.603262 env[1216]: time="2025-05-14T00:36:46.603234921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 00:36:47.108163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792020823.mount: Deactivated successfully. May 14 00:36:47.955592 env[1216]: time="2025-05-14T00:36:47.955537676Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:47.957163 env[1216]: time="2025-05-14T00:36:47.957135530Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:47.958745 env[1216]: time="2025-05-14T00:36:47.958722202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:47.960593 env[1216]: time="2025-05-14T00:36:47.960564635Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:47.961477 env[1216]: time="2025-05-14T00:36:47.961446719Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 00:36:47.970127 env[1216]: time="2025-05-14T00:36:47.970100244Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 14 00:36:48.439558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925382270.mount: Deactivated successfully. 
May 14 00:36:48.443314 env[1216]: time="2025-05-14T00:36:48.443272766Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:48.444694 env[1216]: time="2025-05-14T00:36:48.444656165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:48.446501 env[1216]: time="2025-05-14T00:36:48.445843340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:48.447713 env[1216]: time="2025-05-14T00:36:48.447691082Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:48.448143 env[1216]: time="2025-05-14T00:36:48.448121434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 14 00:36:48.456826 env[1216]: time="2025-05-14T00:36:48.456797268Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 14 00:36:48.978025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288269520.mount: Deactivated successfully. May 14 00:36:51.342752 env[1216]: time="2025-05-14T00:36:51.342696170Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:51.344600 env[1216]: time="2025-05-14T00:36:51.344568365Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:51.346712 env[1216]: time="2025-05-14T00:36:51.346678360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:51.349374 env[1216]: time="2025-05-14T00:36:51.349344316Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:51.350368 env[1216]: time="2025-05-14T00:36:51.349803693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 14 00:36:55.590136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 00:36:55.590310 systemd[1]: Stopped kubelet.service. May 14 00:36:55.591671 systemd[1]: Starting kubelet.service... May 14 00:36:55.683681 systemd[1]: Started kubelet.service. 
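"Scheduled restart job, restart counter is at 2" is systemd's Restart= handling re-launching the still-misconfigured kubelet; kubeadm-style packaging typically ships Restart=always with a 10-second delay, which matches the roughly 10-second gaps between attempts here. A quick way to confirm the policy and, if needed, slow the loop down while debugging (the drop-in below is a hypothetical override, not something present on this host):

    # Inspect the restart policy behind the "Scheduled restart job" messages
    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts

    # Hypothetical drop-in to widen the retry interval while debugging
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-restart.conf
    [Service]
    RestartSec=30
    EOF
    systemctl daemon-reload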
May 14 00:36:55.722314 kubelet[1597]: E0514 00:36:55.722263 1597 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:36:55.724455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:36:55.724579 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:36:57.132089 systemd[1]: Stopped kubelet.service. May 14 00:36:57.133961 systemd[1]: Starting kubelet.service... May 14 00:36:57.150724 systemd[1]: Reloading. May 14 00:36:57.196508 /usr/lib/systemd/system-generators/torcx-generator[1634]: time="2025-05-14T00:36:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:36:57.196535 /usr/lib/systemd/system-generators/torcx-generator[1634]: time="2025-05-14T00:36:57Z" level=info msg="torcx already run" May 14 00:36:57.309302 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:36:57.309445 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:36:57.324473 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:36:57.389976 systemd[1]: Started kubelet.service. May 14 00:36:57.391411 systemd[1]: Stopping kubelet.service... May 14 00:36:57.391776 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:36:57.392056 systemd[1]: Stopped kubelet.service. May 14 00:36:57.393610 systemd[1]: Starting kubelet.service... May 14 00:36:57.472052 systemd[1]: Started kubelet.service. May 14 00:36:57.529158 kubelet[1677]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:36:57.529158 kubelet[1677]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:36:57.529158 kubelet[1677]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
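All three flag deprecation warnings above point at the same remedy: move the setting into the KubeletConfiguration file. Two of the flags have direct config-file equivalents; --pod-infra-container-image does not, since the sandbox ("pause") image now belongs to the container runtime. A sketch under those assumptions (the paths mirror ones appearing elsewhere in this log):

    # Config-file equivalents for the deprecated kubelet flags (v1beta1 fields)
    cat <<'EOF' >> /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
    # --pod-infra-container-image has no KubeletConfiguration field; the pause
    # image is instead pinned in containerd's CRI config (sandbox_image in
    # /etc/containerd/config.toml for the containerd 1.6 seen in this log).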
May 14 00:36:57.530054 kubelet[1677]: I0514 00:36:57.530012 1677 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:36:57.947700 kubelet[1677]: I0514 00:36:57.947667 1677 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:36:57.947861 kubelet[1677]: I0514 00:36:57.947849 1677 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:36:57.948155 kubelet[1677]: I0514 00:36:57.948135 1677 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:36:57.982176 kubelet[1677]: I0514 00:36:57.982147 1677 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:36:57.983717 kubelet[1677]: E0514 00:36:57.983696 1677 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:57.991421 kubelet[1677]: I0514 00:36:57.991396 1677 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 00:36:57.992817 kubelet[1677]: I0514 00:36:57.992776 1677 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:36:57.993108 kubelet[1677]: I0514 00:36:57.992928 1677 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:36:57.993296 kubelet[1677]: I0514 00:36:57.993283 1677 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:36:57.993354 kubelet[1677]: I0514 00:36:57.993345 1677 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:36:57.993654 kubelet[1677]: I0514 00:36:57.993639 1677 state_mem.go:36] "Initialized new in-memory state store" May 14 
00:36:57.994806 kubelet[1677]: I0514 00:36:57.994785 1677 kubelet.go:400] "Attempting to sync node with API server" May 14 00:36:57.994908 kubelet[1677]: I0514 00:36:57.994896 1677 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:36:57.995116 kubelet[1677]: I0514 00:36:57.995103 1677 kubelet.go:312] "Adding apiserver pod source" May 14 00:36:57.995183 kubelet[1677]: I0514 00:36:57.995172 1677 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:36:57.995506 kubelet[1677]: W0514 00:36:57.995446 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:57.995506 kubelet[1677]: E0514 00:36:57.995504 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:57.995667 kubelet[1677]: W0514 00:36:57.995625 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:57.995704 kubelet[1677]: E0514 00:36:57.995668 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:57.996395 kubelet[1677]: I0514 00:36:57.996358 1677 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 14 00:36:57.996688 kubelet[1677]: I0514 00:36:57.996675 1677 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:36:57.996781 kubelet[1677]: W0514 00:36:57.996771 1677 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 00:36:57.998341 kubelet[1677]: I0514 00:36:57.998311 1677 server.go:1264] "Started kubelet" May 14 00:36:57.998593 kubelet[1677]: I0514 00:36:57.998556 1677 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:36:58.000686 kubelet[1677]: I0514 00:36:58.000663 1677 server.go:455] "Adding debug handlers to kubelet server" May 14 00:36:58.002909 kubelet[1677]: I0514 00:36:58.002836 1677 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:36:58.003147 kubelet[1677]: I0514 00:36:58.003122 1677 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:36:58.004656 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
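Every "dial tcp 10.0.0.47:6443: connect: connection refused" above is expected at this point in boot: the kubelet comes up before the kube-apiserver static pod it is about to launch, so the certificate bootstrap and all informer list/watch calls fail until that sandbox starts. The same refusal is visible from a shell on the host (assuming curl is available; -k skips verification of the not-yet-trusted serving certificate):

    # Probe the endpoint the reflectors above keep failing against
    curl -k https://10.0.0.47:6443/healthz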
May 14 00:36:58.004854 kubelet[1677]: I0514 00:36:58.004835 1677 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:36:58.009184 kubelet[1677]: E0514 00:36:58.008970 1677 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.47:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.47:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3db2a9620c9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:36:57.998290077 +0000 UTC m=+0.522145019,LastTimestamp:2025-05-14 00:36:57.998290077 +0000 UTC m=+0.522145019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:36:58.009406 kubelet[1677]: I0514 00:36:58.009383 1677 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:36:58.010088 kubelet[1677]: E0514 00:36:58.010047 1677 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="200ms" May 14 00:36:58.010160 kubelet[1677]: I0514 00:36:58.010114 1677 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:36:58.010515 kubelet[1677]: W0514 00:36:58.010475 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:58.010563 kubelet[1677]: E0514 00:36:58.010523 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:58.011043 kubelet[1677]: I0514 00:36:58.011010 1677 factory.go:221] Registration of the systemd container factory successfully May 14 00:36:58.011124 kubelet[1677]: I0514 00:36:58.011101 1677 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:36:58.011373 kubelet[1677]: I0514 00:36:58.011355 1677 reconciler.go:26] "Reconciler: start to sync state" May 14 00:36:58.012074 kubelet[1677]: E0514 00:36:58.012049 1677 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:36:58.014265 kubelet[1677]: I0514 00:36:58.012545 1677 factory.go:221] Registration of the containerd container factory successfully May 14 00:36:58.025530 kubelet[1677]: I0514 00:36:58.025507 1677 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:36:58.025530 kubelet[1677]: I0514 00:36:58.025527 1677 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:36:58.025635 kubelet[1677]: I0514 00:36:58.025544 1677 state_mem.go:36] "Initialized new in-memory state store" May 14 00:36:58.026627 kubelet[1677]: I0514 00:36:58.026602 1677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:36:58.027709 kubelet[1677]: I0514 00:36:58.027693 1677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:36:58.027824 kubelet[1677]: I0514 00:36:58.027812 1677 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:36:58.027900 kubelet[1677]: I0514 00:36:58.027889 1677 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:36:58.028020 kubelet[1677]: E0514 00:36:58.028002 1677 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:36:58.028667 kubelet[1677]: W0514 00:36:58.028642 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:58.028784 kubelet[1677]: E0514 00:36:58.028771 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:58.095003 kubelet[1677]: I0514 00:36:58.094961 1677 policy_none.go:49] "None policy: Start" May 14 00:36:58.095692 kubelet[1677]: I0514 00:36:58.095673 1677 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:36:58.095789 kubelet[1677]: I0514 00:36:58.095778 1677 state_mem.go:35] "Initializing new in-memory state store" May 14 00:36:58.100429 systemd[1]: Created slice kubepods.slice. May 14 00:36:58.104340 systemd[1]: Created slice kubepods-burstable.slice. May 14 00:36:58.106899 systemd[1]: Created slice kubepods-besteffort.slice. 
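With the resource-manager policies initialized ("None policy: Start"), the kubelet asks systemd to create the QoS cgroup hierarchy: kubepods.slice with kubepods-burstable.slice and kubepods-besteffort.slice beneath it, consistent with the CgroupDriver:"systemd" setting above. The resulting tree can be inspected from systemd's side with read-only commands:

    # View the QoS slices the kubelet just created
    systemctl status kubepods.slice
    systemd-cgls -u kubepods.slice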
May 14 00:36:58.110652 kubelet[1677]: I0514 00:36:58.110629 1677 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:36:58.111081 kubelet[1677]: E0514 00:36:58.111055 1677 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" May 14 00:36:58.120696 kubelet[1677]: I0514 00:36:58.120678 1677 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:36:58.121140 kubelet[1677]: I0514 00:36:58.121106 1677 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:36:58.121520 kubelet[1677]: I0514 00:36:58.121506 1677 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:36:58.122202 kubelet[1677]: E0514 00:36:58.122078 1677 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 00:36:58.128208 kubelet[1677]: I0514 00:36:58.128173 1677 topology_manager.go:215] "Topology Admit Handler" podUID="78ea7e68617b935b82f4d8ed154e1033" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 00:36:58.129135 kubelet[1677]: I0514 00:36:58.129111 1677 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 00:36:58.129852 kubelet[1677]: I0514 00:36:58.129827 1677 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 00:36:58.134320 systemd[1]: Created slice kubepods-burstable-pod78ea7e68617b935b82f4d8ed154e1033.slice. May 14 00:36:58.157204 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 14 00:36:58.160083 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. 
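The three "Topology Admit Handler" entries are the static pods the kubelet just read from /etc/kubernetes/manifests; each admitted pod gets its own kubepods-burstable-pod<UID>.slice, and the UIDs in those slice names match the podUID values above. For orientation, a heavily stripped-down skeleton of such a manifest is sketched below (hypothetical; the real kubeadm-generated kube-scheduler.yaml carries many more flags, probes, and mounts):

    # Hypothetical skeleton of /etc/kubernetes/manifests/kube-scheduler.yaml
    cat <<'EOF' > /etc/kubernetes/manifests/kube-scheduler.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      priorityClassName: system-node-critical   # see the admission retry later in this log
      hostNetwork: true
      containers:
      - name: kube-scheduler
        image: registry.k8s.io/kube-scheduler:v1.30.12   # the image pulled earlier
        command:
        - kube-scheduler
        - --kubeconfig=/etc/kubernetes/scheduler.conf
        volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes/scheduler.conf
          readOnly: true
      volumes:
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: File
    EOF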
May 14 00:36:58.211214 kubelet[1677]: E0514 00:36:58.210452 1677 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="400ms" May 14 00:36:58.212642 kubelet[1677]: I0514 00:36:58.212611 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78ea7e68617b935b82f4d8ed154e1033-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"78ea7e68617b935b82f4d8ed154e1033\") " pod="kube-system/kube-apiserver-localhost" May 14 00:36:58.212714 kubelet[1677]: I0514 00:36:58.212646 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:36:58.212714 kubelet[1677]: I0514 00:36:58.212666 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:36:58.212714 kubelet[1677]: I0514 00:36:58.212681 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 00:36:58.212714 kubelet[1677]: I0514 00:36:58.212695 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78ea7e68617b935b82f4d8ed154e1033-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"78ea7e68617b935b82f4d8ed154e1033\") " pod="kube-system/kube-apiserver-localhost" May 14 00:36:58.212714 kubelet[1677]: I0514 00:36:58.212709 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78ea7e68617b935b82f4d8ed154e1033-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"78ea7e68617b935b82f4d8ed154e1033\") " pod="kube-system/kube-apiserver-localhost" May 14 00:36:58.212844 kubelet[1677]: I0514 00:36:58.212722 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:36:58.212844 kubelet[1677]: I0514 00:36:58.212737 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:36:58.212844 kubelet[1677]: 
I0514 00:36:58.212750 1677 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:36:58.312566 kubelet[1677]: I0514 00:36:58.312534 1677 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:36:58.312791 kubelet[1677]: E0514 00:36:58.312760 1677 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" May 14 00:36:58.455522 kubelet[1677]: E0514 00:36:58.455485 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:36:58.456223 env[1216]: time="2025-05-14T00:36:58.456177230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:78ea7e68617b935b82f4d8ed154e1033,Namespace:kube-system,Attempt:0,}" May 14 00:36:58.459397 kubelet[1677]: E0514 00:36:58.459375 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:36:58.459766 env[1216]: time="2025-05-14T00:36:58.459723749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 14 00:36:58.462436 kubelet[1677]: E0514 00:36:58.462353 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:36:58.462944 env[1216]: time="2025-05-14T00:36:58.462911409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 14 00:36:58.612743 kubelet[1677]: E0514 00:36:58.612678 1677 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="800ms" May 14 00:36:58.714125 kubelet[1677]: I0514 00:36:58.714047 1677 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:36:58.714350 kubelet[1677]: E0514 00:36:58.714319 1677 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" May 14 00:36:59.019607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2643024741.mount: Deactivated successfully. 
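The recurring "Nameserver limits exceeded" warnings mean the host resolv.conf lists more nameservers than the three the glibc resolver (and therefore the kubelet's pod DNS propagation) supports, so the kubelet truncates the list to the "1.1.1.1 1.0.0.1 8.8.8.8" shown. If the file is hand-managed, the fix is simply to keep it at three entries; on hosts where systemd-resolved or DHCP writes the file, change the upstream source instead:

    # Keep the host resolver list within the 3-nameserver glibc limit
    cat <<'EOF' > /etc/resolv.conf
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    EOF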
May 14 00:36:59.024466 env[1216]: time="2025-05-14T00:36:59.024423643Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.025418 env[1216]: time="2025-05-14T00:36:59.025381112Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.026087 env[1216]: time="2025-05-14T00:36:59.026058678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.028304 env[1216]: time="2025-05-14T00:36:59.028274272Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.030319 env[1216]: time="2025-05-14T00:36:59.030286936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.036259 env[1216]: time="2025-05-14T00:36:59.036220524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.036963 env[1216]: time="2025-05-14T00:36:59.036938916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.039965 env[1216]: time="2025-05-14T00:36:59.039933800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.042141 env[1216]: time="2025-05-14T00:36:59.042114327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.042936 env[1216]: time="2025-05-14T00:36:59.042910811Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.043817 env[1216]: time="2025-05-14T00:36:59.043774433Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.044701 env[1216]: time="2025-05-14T00:36:59.044674561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:36:59.073907 env[1216]: time="2025-05-14T00:36:59.073824163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:36:59.074014 env[1216]: time="2025-05-14T00:36:59.073885901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:36:59.074014 env[1216]: time="2025-05-14T00:36:59.073898377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:36:59.074150 env[1216]: time="2025-05-14T00:36:59.074114023Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8fa8a532d2438424abd14c49a0d2e3a766c66a6937af26c97c3959710ba7b39d pid=1733 runtime=io.containerd.runc.v2 May 14 00:36:59.074640 env[1216]: time="2025-05-14T00:36:59.074543314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:36:59.074640 env[1216]: time="2025-05-14T00:36:59.074572784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:36:59.074640 env[1216]: time="2025-05-14T00:36:59.074582581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:36:59.075118 env[1216]: time="2025-05-14T00:36:59.075077410Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ceb86c779cc03d195618c830e7702e6f27e9806c3ca9a605dd55b90c14bc5a2 pid=1734 runtime=io.containerd.runc.v2 May 14 00:36:59.076910 env[1216]: time="2025-05-14T00:36:59.076845918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:36:59.077012 env[1216]: time="2025-05-14T00:36:59.076891662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:36:59.077012 env[1216]: time="2025-05-14T00:36:59.076919053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:36:59.077111 env[1216]: time="2025-05-14T00:36:59.077079237Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5827cc7becbe9cf3dba8443028b058896ae579c941114f322d1750e4c5b3478f pid=1740 runtime=io.containerd.runc.v2 May 14 00:36:59.085749 systemd[1]: Started cri-containerd-8fa8a532d2438424abd14c49a0d2e3a766c66a6937af26c97c3959710ba7b39d.scope. May 14 00:36:59.090098 systemd[1]: Started cri-containerd-7ceb86c779cc03d195618c830e7702e6f27e9806c3ca9a605dd55b90c14bc5a2.scope. May 14 00:36:59.104478 systemd[1]: Started cri-containerd-5827cc7becbe9cf3dba8443028b058896ae579c941114f322d1750e4c5b3478f.scope. 
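Each "Started cri-containerd-<id>.scope" above is a containerd shim (runc v2, namespace k8s.io, per the "starting signal loop" lines) hosting one pod sandbox; the long IDs in the unit names are the sandbox IDs that the RunPodSandbox calls return below. They can be listed through the same CRI socket (assuming crictl is installed):

    # The sandboxes behind the cri-containerd-*.scope units
    crictl pods
    crictl ps    # their containers, once the StartContainer calls below succeed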
May 14 00:36:59.151986 env[1216]: time="2025-05-14T00:36:59.151948871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ceb86c779cc03d195618c830e7702e6f27e9806c3ca9a605dd55b90c14bc5a2\"" May 14 00:36:59.153680 kubelet[1677]: E0514 00:36:59.153650 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:36:59.156665 env[1216]: time="2025-05-14T00:36:59.156629213Z" level=info msg="CreateContainer within sandbox \"7ceb86c779cc03d195618c830e7702e6f27e9806c3ca9a605dd55b90c14bc5a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:36:59.160533 env[1216]: time="2025-05-14T00:36:59.160495236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fa8a532d2438424abd14c49a0d2e3a766c66a6937af26c97c3959710ba7b39d\"" May 14 00:36:59.161122 kubelet[1677]: E0514 00:36:59.161096 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:36:59.165326 env[1216]: time="2025-05-14T00:36:59.162970101Z" level=info msg="CreateContainer within sandbox \"8fa8a532d2438424abd14c49a0d2e3a766c66a6937af26c97c3959710ba7b39d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:36:59.166979 env[1216]: time="2025-05-14T00:36:59.166947086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:78ea7e68617b935b82f4d8ed154e1033,Namespace:kube-system,Attempt:0,} returns sandbox id \"5827cc7becbe9cf3dba8443028b058896ae579c941114f322d1750e4c5b3478f\"" May 14 00:36:59.168059 kubelet[1677]: E0514 00:36:59.167900 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:36:59.169916 env[1216]: time="2025-05-14T00:36:59.169865956Z" level=info msg="CreateContainer within sandbox \"5827cc7becbe9cf3dba8443028b058896ae579c941114f322d1750e4c5b3478f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:36:59.177238 env[1216]: time="2025-05-14T00:36:59.177193503Z" level=info msg="CreateContainer within sandbox \"7ceb86c779cc03d195618c830e7702e6f27e9806c3ca9a605dd55b90c14bc5a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ad4df06cb9ff6937ad83377e243076e26ed0454d972b208eb47f248ea4680d2\"" May 14 00:36:59.177839 env[1216]: time="2025-05-14T00:36:59.177784259Z" level=info msg="StartContainer for \"2ad4df06cb9ff6937ad83377e243076e26ed0454d972b208eb47f248ea4680d2\"" May 14 00:36:59.181614 env[1216]: time="2025-05-14T00:36:59.181581146Z" level=info msg="CreateContainer within sandbox \"8fa8a532d2438424abd14c49a0d2e3a766c66a6937af26c97c3959710ba7b39d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5edbea287361c35ab7a04c53d76cf36ff8cd96363e1958480140cdc6393c54e5\"" May 14 00:36:59.182029 env[1216]: time="2025-05-14T00:36:59.181997162Z" level=info msg="StartContainer for \"5edbea287361c35ab7a04c53d76cf36ff8cd96363e1958480140cdc6393c54e5\"" May 14 00:36:59.186285 env[1216]: time="2025-05-14T00:36:59.186240095Z" level=info 
msg="CreateContainer within sandbox \"5827cc7becbe9cf3dba8443028b058896ae579c941114f322d1750e4c5b3478f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8a0aba7b6174f6f5f96e2dc46324ff1a7a3f781dd9573a9c916583a48749ea3d\"" May 14 00:36:59.186758 env[1216]: time="2025-05-14T00:36:59.186732245Z" level=info msg="StartContainer for \"8a0aba7b6174f6f5f96e2dc46324ff1a7a3f781dd9573a9c916583a48749ea3d\"" May 14 00:36:59.193563 systemd[1]: Started cri-containerd-2ad4df06cb9ff6937ad83377e243076e26ed0454d972b208eb47f248ea4680d2.scope. May 14 00:36:59.202701 systemd[1]: Started cri-containerd-5edbea287361c35ab7a04c53d76cf36ff8cd96363e1958480140cdc6393c54e5.scope. May 14 00:36:59.217113 systemd[1]: Started cri-containerd-8a0aba7b6174f6f5f96e2dc46324ff1a7a3f781dd9573a9c916583a48749ea3d.scope. May 14 00:36:59.230485 kubelet[1677]: W0514 00:36:59.230209 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:59.230485 kubelet[1677]: E0514 00:36:59.230278 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:59.278600 env[1216]: time="2025-05-14T00:36:59.276765956Z" level=info msg="StartContainer for \"5edbea287361c35ab7a04c53d76cf36ff8cd96363e1958480140cdc6393c54e5\" returns successfully" May 14 00:36:59.278882 env[1216]: time="2025-05-14T00:36:59.278828562Z" level=info msg="StartContainer for \"8a0aba7b6174f6f5f96e2dc46324ff1a7a3f781dd9573a9c916583a48749ea3d\" returns successfully" May 14 00:36:59.283228 env[1216]: time="2025-05-14T00:36:59.279963730Z" level=info msg="StartContainer for \"2ad4df06cb9ff6937ad83377e243076e26ed0454d972b208eb47f248ea4680d2\" returns successfully" May 14 00:36:59.320163 kubelet[1677]: W0514 00:36:59.320097 1677 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:59.320245 kubelet[1677]: E0514 00:36:59.320171 1677 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused May 14 00:36:59.413926 kubelet[1677]: E0514 00:36:59.413855 1677 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="1.6s" May 14 00:36:59.515460 kubelet[1677]: I0514 00:36:59.515425 1677 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:37:00.034115 kubelet[1677]: E0514 00:37:00.034006 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:00.036048 kubelet[1677]: E0514 00:37:00.036026 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:00.037609 kubelet[1677]: E0514 00:37:00.037585 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:01.039509 kubelet[1677]: E0514 00:37:01.039476 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:01.325719 kubelet[1677]: E0514 00:37:01.325630 1677 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 00:37:01.466976 kubelet[1677]: E0514 00:37:01.466890 1677 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183f3db2a9620c9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:36:57.998290077 +0000 UTC m=+0.522145019,LastTimestamp:2025-05-14 00:36:57.998290077 +0000 UTC m=+0.522145019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:37:01.518062 kubelet[1677]: I0514 00:37:01.518034 1677 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 00:37:01.525582 kubelet[1677]: E0514 00:37:01.525503 1677 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183f3db2aa33acee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:36:58.012028142 +0000 UTC m=+0.535883084,LastTimestamp:2025-05-14 00:36:58.012028142 +0000 UTC m=+0.535883084,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:37:01.997689 kubelet[1677]: I0514 00:37:01.997656 1677 apiserver.go:52] "Watching apiserver" May 14 00:37:02.010288 kubelet[1677]: I0514 00:37:02.010256 1677 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:37:02.045666 kubelet[1677]: E0514 00:37:02.045638 1677 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 14 00:37:02.046387 kubelet[1677]: E0514 00:37:02.046368 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:03.447664 systemd[1]: Reloading. 
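The mirror-pod failure "no PriorityClass with name system-node-critical was found" is another bootstrap-ordering artifact: the static pods reference a built-in priority class that the freshly started apiserver has not yet finished seeding, and the kubelet simply retries. Once the control plane settles, the classes and mirror pods can be confirmed from any working kubeconfig:

    # Confirm the built-in priority classes and the mirror pods (post-bootstrap)
    kubectl get priorityclass system-node-critical system-cluster-critical
    kubectl -n kube-system get pods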
May 14 00:37:03.490082 /usr/lib/systemd/system-generators/torcx-generator[1977]: time="2025-05-14T00:37:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:37:03.490109 /usr/lib/systemd/system-generators/torcx-generator[1977]: time="2025-05-14T00:37:03Z" level=info msg="torcx already run" May 14 00:37:03.541950 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:37:03.541967 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:37:03.557289 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:37:03.586816 kubelet[1677]: E0514 00:37:03.586786 1677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:03.642835 systemd[1]: Stopping kubelet.service... May 14 00:37:03.662264 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:37:03.662452 systemd[1]: Stopped kubelet.service. May 14 00:37:03.663950 systemd[1]: Starting kubelet.service... May 14 00:37:03.743805 systemd[1]: Started kubelet.service. May 14 00:37:03.784717 kubelet[2019]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:37:03.784717 kubelet[2019]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:37:03.784717 kubelet[2019]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:37:03.785072 kubelet[2019]: I0514 00:37:03.784758 2019 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:37:03.788816 kubelet[2019]: I0514 00:37:03.788788 2019 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:37:03.788816 kubelet[2019]: I0514 00:37:03.788814 2019 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:37:03.789018 kubelet[2019]: I0514 00:37:03.788995 2019 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:37:03.790261 kubelet[2019]: I0514 00:37:03.790241 2019 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:37:03.791345 kubelet[2019]: I0514 00:37:03.791324 2019 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:37:03.796187 kubelet[2019]: I0514 00:37:03.796170 2019 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:37:03.796373 kubelet[2019]: I0514 00:37:03.796352 2019 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:37:03.796545 kubelet[2019]: I0514 00:37:03.796409 2019 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:37:03.796627 kubelet[2019]: I0514 00:37:03.796553 2019 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:37:03.796627 kubelet[2019]: I0514 00:37:03.796562 2019 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:37:03.796627 kubelet[2019]: I0514 00:37:03.796591 2019 state_mem.go:36] "Initialized new in-memory state store" May 14 00:37:03.796715 kubelet[2019]: I0514 00:37:03.796683 2019 kubelet.go:400] "Attempting to sync node with API server" May 14 00:37:03.796715 kubelet[2019]: I0514 00:37:03.796694 2019 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:37:03.796763 kubelet[2019]: I0514 00:37:03.796717 2019 kubelet.go:312] "Adding apiserver pod source" May 14 00:37:03.796763 kubelet[2019]: I0514 00:37:03.796732 2019 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:37:03.797220 kubelet[2019]: I0514 00:37:03.797193 2019 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 14 00:37:03.797370 kubelet[2019]: I0514 00:37:03.797346 2019 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:37:03.800296 kubelet[2019]: I0514 00:37:03.797783 2019 server.go:1264] "Started kubelet" May 14 00:37:03.801604 kubelet[2019]: I0514 00:37:03.801555 2019 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:37:03.801762 kubelet[2019]: I0514 00:37:03.801742 2019 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 
00:37:03.801811 kubelet[2019]: I0514 00:37:03.801778 2019 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 00:37:03.807400 kubelet[2019]: I0514 00:37:03.807383 2019 server.go:455] "Adding debug handlers to kubelet server"
May 14 00:37:03.814326 kubelet[2019]: I0514 00:37:03.814283 2019 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 00:37:03.816768 kubelet[2019]: E0514 00:37:03.816734 2019 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 00:37:03.816854 kubelet[2019]: I0514 00:37:03.816821 2019 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 14 00:37:03.816934 kubelet[2019]: I0514 00:37:03.816912 2019 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 00:37:03.817044 kubelet[2019]: I0514 00:37:03.817024 2019 reconciler.go:26] "Reconciler: start to sync state"
May 14 00:37:03.818463 kubelet[2019]: I0514 00:37:03.818436 2019 factory.go:221] Registration of the systemd container factory successfully
May 14 00:37:03.818549 kubelet[2019]: I0514 00:37:03.818526 2019 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 00:37:03.819584 kubelet[2019]: I0514 00:37:03.819542 2019 factory.go:221] Registration of the containerd container factory successfully
May 14 00:37:03.824036 kubelet[2019]: I0514 00:37:03.823999 2019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 00:37:03.824882 kubelet[2019]: I0514 00:37:03.824835 2019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 00:37:03.824943 kubelet[2019]: I0514 00:37:03.824886 2019 status_manager.go:217] "Starting to sync pod status with apiserver"
May 14 00:37:03.824943 kubelet[2019]: I0514 00:37:03.824905 2019 kubelet.go:2337] "Starting kubelet main sync loop"
May 14 00:37:03.825217 kubelet[2019]: E0514 00:37:03.825192 2019 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 00:37:03.855585 kubelet[2019]: I0514 00:37:03.855555 2019 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 14 00:37:03.855585 kubelet[2019]: I0514 00:37:03.855578 2019 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 14 00:37:03.855585 kubelet[2019]: I0514 00:37:03.855597 2019 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:37:03.855842 kubelet[2019]: I0514 00:37:03.855728 2019 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 14 00:37:03.855842 kubelet[2019]: I0514 00:37:03.855768 2019 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 14 00:37:03.855842 kubelet[2019]: I0514 00:37:03.855786 2019 policy_none.go:49] "None policy: Start"
May 14 00:37:03.856453 kubelet[2019]: I0514 00:37:03.856432 2019 memory_manager.go:170] "Starting memorymanager" policy="None"
May 14 00:37:03.856453 kubelet[2019]: I0514 00:37:03.856456 2019 state_mem.go:35] "Initializing new in-memory state store"
May 14 00:37:03.856622 kubelet[2019]: I0514 00:37:03.856607 2019 state_mem.go:75] "Updated machine memory state"
May 14 00:37:03.861923 kubelet[2019]: I0514 00:37:03.860208 2019 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 00:37:03.862705 kubelet[2019]: I0514 00:37:03.862667 2019 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 00:37:03.863086 kubelet[2019]: I0514 00:37:03.863070 2019 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 00:37:03.926115 kubelet[2019]: I0514 00:37:03.926077 2019 topology_manager.go:215] "Topology Admit Handler" podUID="78ea7e68617b935b82f4d8ed154e1033" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 14 00:37:03.926199 kubelet[2019]: I0514 00:37:03.926182 2019 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 14 00:37:03.926279 kubelet[2019]: I0514 00:37:03.926244 2019 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 14 00:37:03.932107 kubelet[2019]: E0514 00:37:03.932066 2019 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 14 00:37:03.971356 kubelet[2019]: I0514 00:37:03.971330 2019 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 14 00:37:03.977000 kubelet[2019]: I0514 00:37:03.976965 2019 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
May 14 00:37:03.977111 kubelet[2019]: I0514 00:37:03.977051 2019 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 14 00:37:04.118671 kubelet[2019]: I0514 00:37:04.118568 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:37:04.118853 kubelet[2019]: I0514 00:37:04.118830 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 14 00:37:04.118964 kubelet[2019]: I0514 00:37:04.118947 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78ea7e68617b935b82f4d8ed154e1033-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"78ea7e68617b935b82f4d8ed154e1033\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:37:04.119042 kubelet[2019]: I0514 00:37:04.119028 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78ea7e68617b935b82f4d8ed154e1033-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"78ea7e68617b935b82f4d8ed154e1033\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:37:04.119123 kubelet[2019]: I0514 00:37:04.119109 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:37:04.119196 kubelet[2019]: I0514 00:37:04.119184 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:37:04.119280 kubelet[2019]: I0514 00:37:04.119266 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78ea7e68617b935b82f4d8ed154e1033-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"78ea7e68617b935b82f4d8ed154e1033\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:37:04.119348 kubelet[2019]: I0514 00:37:04.119336 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:37:04.119422 kubelet[2019]: I0514 00:37:04.119409 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:37:04.232444 kubelet[2019]: E0514 00:37:04.232258 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:04.233054 kubelet[2019]: E0514 00:37:04.232835 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:04.233054 kubelet[2019]: E0514 00:37:04.232868 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:04.503850 sudo[2053]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 14 00:37:04.504519 sudo[2053]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
May 14 00:37:04.797918 kubelet[2019]: I0514 00:37:04.797803 2019 apiserver.go:52] "Watching apiserver"
May 14 00:37:04.817483 kubelet[2019]: I0514 00:37:04.817460 2019 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 00:37:04.851618 kubelet[2019]: E0514 00:37:04.851588 2019 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 14 00:37:04.852013 kubelet[2019]: E0514 00:37:04.851991 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:04.853303 kubelet[2019]: E0514 00:37:04.853096 2019 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 14 00:37:04.853528 kubelet[2019]: E0514 00:37:04.853509 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:04.853673 kubelet[2019]: E0514 00:37:04.853653 2019 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 14 00:37:04.853864 kubelet[2019]: E0514 00:37:04.853851 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:04.871050 kubelet[2019]: I0514 00:37:04.870966 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.870952372 podStartE2EDuration="1.870952372s" podCreationTimestamp="2025-05-14 00:37:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:37:04.864110546 +0000 UTC m=+1.115791633" watchObservedRunningTime="2025-05-14 00:37:04.870952372 +0000 UTC m=+1.122633459"
May 14 00:37:04.871344 kubelet[2019]: I0514 00:37:04.871308 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.871299751 podStartE2EDuration="1.871299751s" podCreationTimestamp="2025-05-14 00:37:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:37:04.870361837 +0000 UTC m=+1.122042924" watchObservedRunningTime="2025-05-14 00:37:04.871299751 +0000 UTC m=+1.122980838"
May 14 00:37:04.876348 kubelet[2019]: I0514 00:37:04.876309 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.876300224 podStartE2EDuration="1.876300224s" podCreationTimestamp="2025-05-14 00:37:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:37:04.876295585 +0000 UTC m=+1.127976672" watchObservedRunningTime="2025-05-14 00:37:04.876300224 +0000 UTC m=+1.127981311"
May 14 00:37:04.965753 sudo[2053]: pam_unix(sudo:session): session closed for user root
May 14 00:37:05.839592 kubelet[2019]: E0514 00:37:05.839548 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:05.839935 kubelet[2019]: E0514 00:37:05.839820 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:05.840223 kubelet[2019]: E0514 00:37:05.840203 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:06.397115 sudo[1318]: pam_unix(sudo:session): session closed for user root
May 14 00:37:06.398599 sshd[1315]: pam_unix(sshd:session): session closed for user core
May 14 00:37:06.400842 systemd[1]: session-5.scope: Deactivated successfully.
May 14 00:37:06.401038 systemd[1]: session-5.scope: Consumed 7.450s CPU time.
May 14 00:37:06.401420 systemd[1]: sshd@4-10.0.0.47:22-10.0.0.1:39886.service: Deactivated successfully.
May 14 00:37:06.402316 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit.
May 14 00:37:06.402951 systemd-logind[1203]: Removed session 5.
May 14 00:37:06.840429 kubelet[2019]: E0514 00:37:06.840316 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:06.895674 kubelet[2019]: E0514 00:37:06.895647 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:09.038425 kubelet[2019]: E0514 00:37:09.038395 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:09.845703 kubelet[2019]: E0514 00:37:09.845144 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:11.799713 kubelet[2019]: E0514 00:37:11.799673 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:11.848297 kubelet[2019]: E0514 00:37:11.848260 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:12.848783 kubelet[2019]: E0514 00:37:12.848750 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:16.903632 kubelet[2019]: E0514 00:37:16.902947 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:18.012909 update_engine[1207]: I0514 00:37:18.012849 1207 update_attempter.cc:509] Updating boot flags...
May 14 00:37:18.775116 kubelet[2019]: I0514 00:37:18.775073 2019 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 14 00:37:18.775715 env[1216]: time="2025-05-14T00:37:18.775676198Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 14 00:37:18.776134 kubelet[2019]: I0514 00:37:18.776116 2019 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 14 00:37:19.827519 kubelet[2019]: I0514 00:37:19.827483 2019 topology_manager.go:215] "Topology Admit Handler" podUID="c66dccec-ada8-4619-a5aa-b2b60b314eeb" podNamespace="kube-system" podName="kube-proxy-6w7xv"
May 14 00:37:19.832261 kubelet[2019]: I0514 00:37:19.832230 2019 topology_manager.go:215] "Topology Admit Handler" podUID="700f1054-d2a0-48a6-85f3-aeb90e95832a" podNamespace="kube-system" podName="cilium-69wh4"
May 14 00:37:19.832733 systemd[1]: Created slice kubepods-besteffort-podc66dccec_ada8_4619_a5aa_b2b60b314eeb.slice.
May 14 00:37:19.841666 systemd[1]: Created slice kubepods-burstable-pod700f1054_d2a0_48a6_85f3_aeb90e95832a.slice.
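The dns.go:153 warnings that repeat throughout this boot come from the host's /etc/resolv.conf carrying more nameserver entries than the classic resolver limit of three (glibc's MAXNS, which kubelet mirrors); kubelet applies the first three and logs the resulting line, which is why every warning shows the same "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that clipping behavior, assuming a resolv.conf-style input (the fourth server below is invented for illustration; this is not kubelet's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic resolver limit (MAXNS = 3) that
// kubelet enforces when it builds a pod's resolv.conf.
const maxNameservers = 3

// clipNameservers keeps the first three nameserver entries and reports
// the rest as omitted, the way the dns.go warning above describes.
func clipNameservers(resolvConf string) (kept, dropped []string) {
	s := bufio.NewScanner(strings.NewReader(resolvConf))
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) < 2 || fields[0] != "nameserver" {
			continue
		}
		if len(kept) < maxNameservers {
			kept = append(kept, fields[1])
		} else {
			dropped = append(dropped, fields[1])
		}
	}
	return kept, dropped
}

func main() {
	// Hypothetical host resolv.conf: one entry more than the limit
	// (8.8.4.4 is a made-up fourth server for this example).
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, dropped := clipNameservers(conf)
	fmt.Println("applied:", strings.Join(kept, " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
	fmt.Println("omitted:", strings.Join(dropped, " "))
}
```

Trimming the host file down to three entries, or pointing kubelet at a dedicated file with its --resolv-conf flag, would silence these warnings.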
May 14 00:37:19.872729 kubelet[2019]: I0514 00:37:19.872677 2019 topology_manager.go:215] "Topology Admit Handler" podUID="efd62912-b8d5-4e30-bff1-ff187229b969" podNamespace="kube-system" podName="cilium-operator-599987898-s5b8m"
May 14 00:37:19.877978 systemd[1]: Created slice kubepods-besteffort-podefd62912_b8d5_4e30_bff1_ff187229b969.slice.
May 14 00:37:19.919977 kubelet[2019]: I0514 00:37:19.919918 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-config-path\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.919977 kubelet[2019]: I0514 00:37:19.919964 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c66dccec-ada8-4619-a5aa-b2b60b314eeb-xtables-lock\") pod \"kube-proxy-6w7xv\" (UID: \"c66dccec-ada8-4619-a5aa-b2b60b314eeb\") " pod="kube-system/kube-proxy-6w7xv"
May 14 00:37:19.919977 kubelet[2019]: I0514 00:37:19.919984 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cni-path\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920193 kubelet[2019]: I0514 00:37:19.920001 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c66dccec-ada8-4619-a5aa-b2b60b314eeb-lib-modules\") pod \"kube-proxy-6w7xv\" (UID: \"c66dccec-ada8-4619-a5aa-b2b60b314eeb\") " pod="kube-system/kube-proxy-6w7xv"
May 14 00:37:19.920193 kubelet[2019]: I0514 00:37:19.920018 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ktp\" (UniqueName: \"kubernetes.io/projected/700f1054-d2a0-48a6-85f3-aeb90e95832a-kube-api-access-b9ktp\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920193 kubelet[2019]: I0514 00:37:19.920033 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/700f1054-d2a0-48a6-85f3-aeb90e95832a-hubble-tls\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920193 kubelet[2019]: I0514 00:37:19.920052 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-xtables-lock\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920193 kubelet[2019]: I0514 00:37:19.920067 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c66dccec-ada8-4619-a5aa-b2b60b314eeb-kube-proxy\") pod \"kube-proxy-6w7xv\" (UID: \"c66dccec-ada8-4619-a5aa-b2b60b314eeb\") " pod="kube-system/kube-proxy-6w7xv"
May 14 00:37:19.920193 kubelet[2019]: I0514 00:37:19.920081 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-bpf-maps\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920335 kubelet[2019]: I0514 00:37:19.920098 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/700f1054-d2a0-48a6-85f3-aeb90e95832a-clustermesh-secrets\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920335 kubelet[2019]: I0514 00:37:19.920113 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-host-proc-sys-kernel\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920335 kubelet[2019]: I0514 00:37:19.920139 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w72xk\" (UniqueName: \"kubernetes.io/projected/efd62912-b8d5-4e30-bff1-ff187229b969-kube-api-access-w72xk\") pod \"cilium-operator-599987898-s5b8m\" (UID: \"efd62912-b8d5-4e30-bff1-ff187229b969\") " pod="kube-system/cilium-operator-599987898-s5b8m"
May 14 00:37:19.920335 kubelet[2019]: I0514 00:37:19.920157 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-run\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920335 kubelet[2019]: I0514 00:37:19.920173 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efd62912-b8d5-4e30-bff1-ff187229b969-cilium-config-path\") pod \"cilium-operator-599987898-s5b8m\" (UID: \"efd62912-b8d5-4e30-bff1-ff187229b969\") " pod="kube-system/cilium-operator-599987898-s5b8m"
May 14 00:37:19.920479 kubelet[2019]: I0514 00:37:19.920188 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnr4n\" (UniqueName: \"kubernetes.io/projected/c66dccec-ada8-4619-a5aa-b2b60b314eeb-kube-api-access-vnr4n\") pod \"kube-proxy-6w7xv\" (UID: \"c66dccec-ada8-4619-a5aa-b2b60b314eeb\") " pod="kube-system/kube-proxy-6w7xv"
May 14 00:37:19.920479 kubelet[2019]: I0514 00:37:19.920203 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-hostproc\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920479 kubelet[2019]: I0514 00:37:19.920216 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-etc-cni-netd\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920479 kubelet[2019]: I0514 00:37:19.920234 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-lib-modules\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920479 kubelet[2019]: I0514 00:37:19.920250 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-host-proc-sys-net\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:19.920479 kubelet[2019]: I0514 00:37:19.920266 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-cgroup\") pod \"cilium-69wh4\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " pod="kube-system/cilium-69wh4"
May 14 00:37:20.139732 kubelet[2019]: E0514 00:37:20.139044 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:20.140161 env[1216]: time="2025-05-14T00:37:20.140103763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6w7xv,Uid:c66dccec-ada8-4619-a5aa-b2b60b314eeb,Namespace:kube-system,Attempt:0,}"
May 14 00:37:20.143856 kubelet[2019]: E0514 00:37:20.143819 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:20.144580 env[1216]: time="2025-05-14T00:37:20.144257080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-69wh4,Uid:700f1054-d2a0-48a6-85f3-aeb90e95832a,Namespace:kube-system,Attempt:0,}"
May 14 00:37:20.161041 env[1216]: time="2025-05-14T00:37:20.160611350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 00:37:20.161041 env[1216]: time="2025-05-14T00:37:20.160699188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 00:37:20.161041 env[1216]: time="2025-05-14T00:37:20.160726707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 00:37:20.161041 env[1216]: time="2025-05-14T00:37:20.160998222Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af4d26faaa5a6fa2e1380071edc619a4190db09040c243c2dd591391f67268ef pid=2129 runtime=io.containerd.runc.v2
May 14 00:37:20.163280 env[1216]: time="2025-05-14T00:37:20.163214897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 00:37:20.163394 env[1216]: time="2025-05-14T00:37:20.163259496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 00:37:20.163495 env[1216]: time="2025-05-14T00:37:20.163385774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 00:37:20.163732 env[1216]: time="2025-05-14T00:37:20.163698727Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e pid=2145 runtime=io.containerd.runc.v2
May 14 00:37:20.172020 systemd[1]: Started cri-containerd-af4d26faaa5a6fa2e1380071edc619a4190db09040c243c2dd591391f67268ef.scope.
May 14 00:37:20.180931 kubelet[2019]: E0514 00:37:20.180328 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:20.180713 systemd[1]: Started cri-containerd-611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e.scope.
May 14 00:37:20.182154 env[1216]: time="2025-05-14T00:37:20.182112716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s5b8m,Uid:efd62912-b8d5-4e30-bff1-ff187229b969,Namespace:kube-system,Attempt:0,}"
May 14 00:37:20.211322 env[1216]: time="2025-05-14T00:37:20.211257808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 00:37:20.211579 env[1216]: time="2025-05-14T00:37:20.211296487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 00:37:20.211579 env[1216]: time="2025-05-14T00:37:20.211311887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 00:37:20.211579 env[1216]: time="2025-05-14T00:37:20.211434444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739 pid=2194 runtime=io.containerd.runc.v2
May 14 00:37:20.221108 env[1216]: time="2025-05-14T00:37:20.220596379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6w7xv,Uid:c66dccec-ada8-4619-a5aa-b2b60b314eeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"af4d26faaa5a6fa2e1380071edc619a4190db09040c243c2dd591391f67268ef\""
May 14 00:37:20.222942 kubelet[2019]: E0514 00:37:20.222673 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:20.224020 systemd[1]: Started cri-containerd-dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739.scope.
May 14 00:37:20.230693 env[1216]: time="2025-05-14T00:37:20.230653656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-69wh4,Uid:700f1054-d2a0-48a6-85f3-aeb90e95832a,Namespace:kube-system,Attempt:0,} returns sandbox id \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\""
May 14 00:37:20.230916 env[1216]: time="2025-05-14T00:37:20.230729375Z" level=info msg="CreateContainer within sandbox \"af4d26faaa5a6fa2e1380071edc619a4190db09040c243c2dd591391f67268ef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 00:37:20.231839 kubelet[2019]: E0514 00:37:20.231614 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:20.232906 env[1216]: time="2025-05-14T00:37:20.232857852Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 14 00:37:20.254165 env[1216]: time="2025-05-14T00:37:20.252459177Z" level=info msg="CreateContainer within sandbox \"af4d26faaa5a6fa2e1380071edc619a4190db09040c243c2dd591391f67268ef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"08d5575b70bfeca0740d673ccce5059f8c7b0b9f91afa516624e7bbc33894405\""
May 14 00:37:20.254165 env[1216]: time="2025-05-14T00:37:20.253120003Z" level=info msg="StartContainer for \"08d5575b70bfeca0740d673ccce5059f8c7b0b9f91afa516624e7bbc33894405\""
May 14 00:37:20.270282 systemd[1]: Started cri-containerd-08d5575b70bfeca0740d673ccce5059f8c7b0b9f91afa516624e7bbc33894405.scope.
May 14 00:37:20.277179 env[1216]: time="2025-05-14T00:37:20.277078480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s5b8m,Uid:efd62912-b8d5-4e30-bff1-ff187229b969,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739\""
May 14 00:37:20.277797 kubelet[2019]: E0514 00:37:20.277676 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:20.314934 env[1216]: time="2025-05-14T00:37:20.314868397Z" level=info msg="StartContainer for \"08d5575b70bfeca0740d673ccce5059f8c7b0b9f91afa516624e7bbc33894405\" returns successfully"
May 14 00:37:20.863526 kubelet[2019]: E0514 00:37:20.863492 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:20.872395 kubelet[2019]: I0514 00:37:20.872334 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6w7xv" podStartSLOduration=1.87231835 podStartE2EDuration="1.87231835s" podCreationTimestamp="2025-05-14 00:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:37:20.872226912 +0000 UTC m=+17.123907999" watchObservedRunningTime="2025-05-14 00:37:20.87231835 +0000 UTC m=+17.123999397"
May 14 00:37:24.175734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061301952.mount: Deactivated successfully.
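The cilium image in the PullImage line above is requested by a reference that carries both a tag and a digest (v1.12.5@sha256:06ce...); when both are present, the content-addressed digest is what actually gets resolved and the tag is effectively informational, which is why the pull completes below with a different-looking local image ID (sha256:b69c..., the image config digest). A toy split of such a reference, assuming plain string handling (real code should use a reference-parsing library such as github.com/distribution/reference rather than string surgery):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef is a toy parser for "repo:tag@sha256:..." pull references
// like the cilium image logged above. Illustrative only.
func splitRef(ref string) (repo, tag, digest string) {
	if at := strings.Index(ref, "@"); at >= 0 {
		digest = ref[at+1:]
		ref = ref[:at]
	}
	// A tag colon must come after the last slash, so registry ports
	// ("host:5000/img") are not mistaken for tags.
	if colon := strings.LastIndex(ref, ":"); colon > strings.LastIndex(ref, "/") {
		tag = ref[colon+1:]
		ref = ref[:colon]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println(repo)   // quay.io/cilium/cilium
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:06ce...
}
```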
May 14 00:37:26.431048 env[1216]: time="2025-05-14T00:37:26.430995031Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:37:26.432573 env[1216]: time="2025-05-14T00:37:26.432533048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:37:26.436189 env[1216]: time="2025-05-14T00:37:26.436143393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:37:26.436546 env[1216]: time="2025-05-14T00:37:26.436508947Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 14 00:37:26.439979 env[1216]: time="2025-05-14T00:37:26.439935255Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 14 00:37:26.440500 env[1216]: time="2025-05-14T00:37:26.440460767Z" level=info msg="CreateContainer within sandbox \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 00:37:26.450437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount564859901.mount: Deactivated successfully.
May 14 00:37:26.455329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154015836.mount: Deactivated successfully.
May 14 00:37:26.456107 env[1216]: time="2025-05-14T00:37:26.456059730Z" level=info msg="CreateContainer within sandbox \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f\""
May 14 00:37:26.457122 env[1216]: time="2025-05-14T00:37:26.457093074Z" level=info msg="StartContainer for \"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f\""
May 14 00:37:26.473699 systemd[1]: Started cri-containerd-1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f.scope.
May 14 00:37:26.529218 env[1216]: time="2025-05-14T00:37:26.527428283Z" level=info msg="StartContainer for \"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f\" returns successfully"
May 14 00:37:26.568820 systemd[1]: cri-containerd-1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f.scope: Deactivated successfully.
May 14 00:37:26.674502 env[1216]: time="2025-05-14T00:37:26.674456604Z" level=info msg="shim disconnected" id=1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f
May 14 00:37:26.674815 env[1216]: time="2025-05-14T00:37:26.674791799Z" level=warning msg="cleaning up after shim disconnected" id=1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f namespace=k8s.io
May 14 00:37:26.674914 env[1216]: time="2025-05-14T00:37:26.674899238Z" level=info msg="cleaning up dead shim"
May 14 00:37:26.682144 env[1216]: time="2025-05-14T00:37:26.682043249Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:37:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2449 runtime=io.containerd.runc.v2\n"
May 14 00:37:26.874223 kubelet[2019]: E0514 00:37:26.874169 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:26.881182 env[1216]: time="2025-05-14T00:37:26.881137458Z" level=info msg="CreateContainer within sandbox \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 00:37:26.892039 env[1216]: time="2025-05-14T00:37:26.891990812Z" level=info msg="CreateContainer within sandbox \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540\""
May 14 00:37:26.892669 env[1216]: time="2025-05-14T00:37:26.892640722Z" level=info msg="StartContainer for \"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540\""
May 14 00:37:26.917379 systemd[1]: Started cri-containerd-f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540.scope.
May 14 00:37:26.986733 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 00:37:26.987341 systemd[1]: Stopped systemd-sysctl.service.
May 14 00:37:26.987496 systemd[1]: Stopping systemd-sysctl.service...
May 14 00:37:26.988976 systemd[1]: Starting systemd-sysctl.service...
May 14 00:37:26.993272 systemd[1]: cri-containerd-f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540.scope: Deactivated successfully.
May 14 00:37:26.999027 systemd[1]: Finished systemd-sysctl.service.
May 14 00:37:27.009046 env[1216]: time="2025-05-14T00:37:27.008993516Z" level=info msg="StartContainer for \"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540\" returns successfully"
May 14 00:37:27.055462 env[1216]: time="2025-05-14T00:37:27.055414759Z" level=info msg="shim disconnected" id=f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540
May 14 00:37:27.055462 env[1216]: time="2025-05-14T00:37:27.055458919Z" level=warning msg="cleaning up after shim disconnected" id=f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540 namespace=k8s.io
May 14 00:37:27.055462 env[1216]: time="2025-05-14T00:37:27.055469358Z" level=info msg="cleaning up dead shim"
May 14 00:37:27.062395 env[1216]: time="2025-05-14T00:37:27.062341618Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:37:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2513 runtime=io.containerd.runc.v2\n"
May 14 00:37:27.448427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f-rootfs.mount: Deactivated successfully.
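Each cilium init step here (mount-cgroup above, apply-sysctl-overwrites just after) runs as a short-lived container, so the pattern of scope: Deactivated successfully followed by "shim disconnected" / "cleaning up dead shim" is ordinary teardown of the per-container runc shim, not a failure. The unit names follow the convention visible in this journal: the systemd cgroup driver creates a per-pod slice derived from the pod UID and one cri-containerd-<id>.scope per container. A small sketch of how those names are formed, reconstructed from the log lines rather than from containerd source:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice rebuilds the per-pod slice name seen above: QoS class plus
// the pod UID with dashes turned into underscores (systemd unit names
// cannot carry the dashes as-is).
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

// containerScope rebuilds the transient scope name started for each
// container in this journal.
func containerScope(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	// Values taken from the cilium-69wh4 entries in this log.
	fmt.Println(podSlice("burstable", "700f1054-d2a0-48a6-85f3-aeb90e95832a"))
	// kubepods-burstable-pod700f1054_d2a0_48a6_85f3_aeb90e95832a.slice
	fmt.Println(containerScope("1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f"))
	// cri-containerd-1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f.scope
}
```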
May 14 00:37:27.875736 kubelet[2019]: E0514 00:37:27.875702 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:27.877782 env[1216]: time="2025-05-14T00:37:27.877742777Z" level=info msg="CreateContainer within sandbox \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 00:37:27.893927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388999514.mount: Deactivated successfully.
May 14 00:37:27.896972 env[1216]: time="2025-05-14T00:37:27.896925977Z" level=info msg="CreateContainer within sandbox \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf\""
May 14 00:37:27.897606 env[1216]: time="2025-05-14T00:37:27.897579728Z" level=info msg="StartContainer for \"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf\""
May 14 00:37:27.945239 systemd[1]: Started cri-containerd-bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf.scope.
May 14 00:37:28.007165 env[1216]: time="2025-05-14T00:37:28.006871419Z" level=info msg="StartContainer for \"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf\" returns successfully"
May 14 00:37:28.014732 systemd[1]: cri-containerd-bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf.scope: Deactivated successfully.
May 14 00:37:28.035394 env[1216]: time="2025-05-14T00:37:28.035330661Z" level=info msg="shim disconnected" id=bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf
May 14 00:37:28.035394 env[1216]: time="2025-05-14T00:37:28.035379301Z" level=warning msg="cleaning up after shim disconnected" id=bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf namespace=k8s.io
May 14 00:37:28.035394 env[1216]: time="2025-05-14T00:37:28.035389061Z" level=info msg="cleaning up dead shim"
May 14 00:37:28.042235 env[1216]: time="2025-05-14T00:37:28.042180726Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:37:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2569 runtime=io.containerd.runc.v2\n"
May 14 00:37:28.274191 systemd[1]: Started sshd@5-10.0.0.47:22-10.0.0.1:37224.service.
May 14 00:37:28.312287 sshd[2582]: Accepted publickey for core from 10.0.0.1 port 37224 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:37:28.313599 sshd[2582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:37:28.316835 systemd-logind[1203]: New session 6 of user core.
May 14 00:37:28.317645 systemd[1]: Started session-6.scope.
May 14 00:37:28.444147 sshd[2582]: pam_unix(sshd:session): session closed for user core
May 14 00:37:28.446156 env[1216]: time="2025-05-14T00:37:28.446110648Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:37:28.448080 systemd[1]: run-containerd-runc-k8s.io-bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf-runc.EQUw3t.mount: Deactivated successfully.
May 14 00:37:28.448180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf-rootfs.mount: Deactivated successfully.
May 14 00:37:28.448863 env[1216]: time="2025-05-14T00:37:28.448743851Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:37:28.449103 systemd[1]: sshd@5-10.0.0.47:22-10.0.0.1:37224.service: Deactivated successfully.
May 14 00:37:28.449666 systemd[1]: session-6.scope: Deactivated successfully.
May 14 00:37:28.450456 systemd-logind[1203]: Session 6 logged out. Waiting for processes to exit.
May 14 00:37:28.452594 systemd-logind[1203]: Removed session 6.
May 14 00:37:28.455676 env[1216]: time="2025-05-14T00:37:28.455643515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 14 00:37:28.456087 env[1216]: time="2025-05-14T00:37:28.456061229Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 14 00:37:28.458788 env[1216]: time="2025-05-14T00:37:28.458406996Z" level=info msg="CreateContainer within sandbox \"dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 14 00:37:28.467921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500266680.mount: Deactivated successfully.
May 14 00:37:28.470547 env[1216]: time="2025-05-14T00:37:28.470502347Z" level=info msg="CreateContainer within sandbox \"dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\""
May 14 00:37:28.471146 env[1216]: time="2025-05-14T00:37:28.471116379Z" level=info msg="StartContainer for \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\""
May 14 00:37:28.485466 systemd[1]: Started cri-containerd-fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83.scope.
May 14 00:37:28.552290 env[1216]: time="2025-05-14T00:37:28.552172847Z" level=info msg="StartContainer for \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\" returns successfully"
May 14 00:37:28.878292 kubelet[2019]: E0514 00:37:28.878253 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:28.881320 kubelet[2019]: E0514 00:37:28.881297 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:28.883433 env[1216]: time="2025-05-14T00:37:28.883393064Z" level=info msg="CreateContainer within sandbox \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 00:37:28.896016 env[1216]: time="2025-05-14T00:37:28.895966729Z" level=info msg="CreateContainer within sandbox \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f\""
May 14 00:37:28.896434 env[1216]: time="2025-05-14T00:37:28.896395603Z" level=info msg="StartContainer for \"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f\""
May 14 00:37:28.919806 systemd[1]: Started cri-containerd-f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f.scope.
May 14 00:37:28.922173 kubelet[2019]: I0514 00:37:28.922128 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-s5b8m" podStartSLOduration=1.744821096 podStartE2EDuration="9.922106044s" podCreationTimestamp="2025-05-14 00:37:19 +0000 UTC" firstStartedPulling="2025-05-14 00:37:20.279614109 +0000 UTC m=+16.531295196" lastFinishedPulling="2025-05-14 00:37:28.456899097 +0000 UTC m=+24.708580144" observedRunningTime="2025-05-14 00:37:28.89514678 +0000 UTC m=+25.146827867" watchObservedRunningTime="2025-05-14 00:37:28.922106044 +0000 UTC m=+25.173787131"
May 14 00:37:28.997640 systemd[1]: cri-containerd-f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f.scope: Deactivated successfully.
May 14 00:37:29.012524 env[1216]: time="2025-05-14T00:37:29.012443029Z" level=info msg="StartContainer for \"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f\" returns successfully"
May 14 00:37:29.035264 env[1216]: time="2025-05-14T00:37:29.035215804Z" level=info msg="shim disconnected" id=f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f
May 14 00:37:29.035532 env[1216]: time="2025-05-14T00:37:29.035512360Z" level=warning msg="cleaning up after shim disconnected" id=f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f namespace=k8s.io
May 14 00:37:29.035613 env[1216]: time="2025-05-14T00:37:29.035595759Z" level=info msg="cleaning up dead shim"
May 14 00:37:29.043998 env[1216]: time="2025-05-14T00:37:29.043963167Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:37:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2676 runtime=io.containerd.runc.v2\n"
May 14 00:37:29.885148 kubelet[2019]: E0514 00:37:29.884989 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:29.885148 kubelet[2019]: E0514 00:37:29.885039 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:29.887445 env[1216]: time="2025-05-14T00:37:29.887408080Z" level=info msg="CreateContainer within sandbox \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 00:37:29.900770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211437350.mount: Deactivated successfully.
May 14 00:37:29.905863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974515938.mount: Deactivated successfully.
May 14 00:37:29.910467 env[1216]: time="2025-05-14T00:37:29.909948818Z" level=info msg="CreateContainer within sandbox \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\""
May 14 00:37:29.910859 env[1216]: time="2025-05-14T00:37:29.910826806Z" level=info msg="StartContainer for \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\""
May 14 00:37:29.926127 systemd[1]: Started cri-containerd-3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673.scope.
May 14 00:37:29.987730 env[1216]: time="2025-05-14T00:37:29.987683698Z" level=info msg="StartContainer for \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\" returns successfully"
May 14 00:37:30.117543 kubelet[2019]: I0514 00:37:30.117506 2019 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 14 00:37:30.142629 kubelet[2019]: I0514 00:37:30.142520 2019 topology_manager.go:215] "Topology Admit Handler" podUID="8f1d3147-b5cc-4654-b923-f85447143af6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hdg8n"
May 14 00:37:30.143603 kubelet[2019]: I0514 00:37:30.143572 2019 topology_manager.go:215] "Topology Admit Handler" podUID="dab7c2a4-3672-423b-ae4d-564e6e0c16ea" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jlfg4"
May 14 00:37:30.152668 systemd[1]: Created slice kubepods-burstable-pod8f1d3147_b5cc_4654_b923_f85447143af6.slice.
May 14 00:37:30.156533 systemd[1]: Created slice kubepods-burstable-poddab7c2a4_3672_423b_ae4d_564e6e0c16ea.slice.
May 14 00:37:30.254405 kubelet[2019]: I0514 00:37:30.254364 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlphp\" (UniqueName: \"kubernetes.io/projected/dab7c2a4-3672-423b-ae4d-564e6e0c16ea-kube-api-access-mlphp\") pod \"coredns-7db6d8ff4d-jlfg4\" (UID: \"dab7c2a4-3672-423b-ae4d-564e6e0c16ea\") " pod="kube-system/coredns-7db6d8ff4d-jlfg4"
May 14 00:37:30.254627 kubelet[2019]: I0514 00:37:30.254607 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lm8n\" (UniqueName: \"kubernetes.io/projected/8f1d3147-b5cc-4654-b923-f85447143af6-kube-api-access-9lm8n\") pod \"coredns-7db6d8ff4d-hdg8n\" (UID: \"8f1d3147-b5cc-4654-b923-f85447143af6\") " pod="kube-system/coredns-7db6d8ff4d-hdg8n"
May 14 00:37:30.254745 kubelet[2019]: I0514 00:37:30.254730 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dab7c2a4-3672-423b-ae4d-564e6e0c16ea-config-volume\") pod \"coredns-7db6d8ff4d-jlfg4\" (UID: \"dab7c2a4-3672-423b-ae4d-564e6e0c16ea\") " pod="kube-system/coredns-7db6d8ff4d-jlfg4"
May 14 00:37:30.254830 kubelet[2019]: I0514 00:37:30.254816 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f1d3147-b5cc-4654-b923-f85447143af6-config-volume\") pod \"coredns-7db6d8ff4d-hdg8n\" (UID: \"8f1d3147-b5cc-4654-b923-f85447143af6\") " pod="kube-system/coredns-7db6d8ff4d-hdg8n"
May 14 00:37:30.275901 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
May 14 00:37:30.456452 kubelet[2019]: E0514 00:37:30.456353 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:30.457122 env[1216]: time="2025-05-14T00:37:30.457081941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hdg8n,Uid:8f1d3147-b5cc-4654-b923-f85447143af6,Namespace:kube-system,Attempt:0,}"
May 14 00:37:30.459692 kubelet[2019]: E0514 00:37:30.459629 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:30.460383 env[1216]: time="2025-05-14T00:37:30.460341099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jlfg4,Uid:dab7c2a4-3672-423b-ae4d-564e6e0c16ea,Namespace:kube-system,Attempt:0,}"
May 14 00:37:30.548901 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
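The kernel emits the Unprivileged eBPF warning above whenever BPF programs are loaded (here, cilium setting up its datapath) while the kernel.unprivileged_bpf_disabled sysctl is 0. A small probe of that knob, assuming the usual /proc/sys path and the commonly documented value meanings (a sketch, not an authoritative reference):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Reads the sysctl behind the "Unprivileged eBPF is enabled" warning.
// Commonly documented meanings: 0 = unprivileged bpf() allowed,
// 1 = disabled and locked until reboot, 2 = disabled but a privileged
// writer can still change it later.
func main() {
	raw, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	switch v := strings.TrimSpace(string(raw)); v {
	case "0":
		fmt.Println("unprivileged eBPF enabled (the state this kernel warns about)")
	case "1":
		fmt.Println("unprivileged eBPF disabled (locked until reboot)")
	case "2":
		fmt.Println("unprivileged eBPF disabled (can be re-enabled by root)")
	default:
		fmt.Println("unexpected value:", v)
	}
}
```

Setting the knob to 2 is the usual hardening step when unprivileged BPF is not needed; cilium's own programs are loaded by the privileged agent, so this does not normally affect it.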
May 14 00:37:30.890051 kubelet[2019]: E0514 00:37:30.890015 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:31.893187 kubelet[2019]: E0514 00:37:31.891423 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:32.159633 systemd-networkd[1055]: cilium_host: Link UP
May 14 00:37:32.159843 systemd-networkd[1055]: cilium_net: Link UP
May 14 00:37:32.159847 systemd-networkd[1055]: cilium_net: Gained carrier
May 14 00:37:32.160560 systemd-networkd[1055]: cilium_host: Gained carrier
May 14 00:37:32.162469 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 14 00:37:32.161730 systemd-networkd[1055]: cilium_host: Gained IPv6LL
May 14 00:37:32.164005 systemd-networkd[1055]: cilium_net: Gained IPv6LL
May 14 00:37:32.242726 systemd-networkd[1055]: cilium_vxlan: Link UP
May 14 00:37:32.242732 systemd-networkd[1055]: cilium_vxlan: Gained carrier
May 14 00:37:32.552916 kernel: NET: Registered PF_ALG protocol family
May 14 00:37:32.893244 kubelet[2019]: E0514 00:37:32.893007 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:33.134267 systemd-networkd[1055]: lxc_health: Link UP
May 14 00:37:33.151283 systemd-networkd[1055]: lxc_health: Gained carrier
May 14 00:37:33.151900 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 14 00:37:33.448551 systemd[1]: Started sshd@6-10.0.0.47:22-10.0.0.1:54004.service.
May 14 00:37:33.489463 sshd[3217]: Accepted publickey for core from 10.0.0.1 port 54004 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk
May 14 00:37:33.491324 sshd[3217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 14 00:37:33.495106 systemd-logind[1203]: New session 7 of user core.
May 14 00:37:33.495984 systemd[1]: Started session-7.scope.
May 14 00:37:33.548613 systemd-networkd[1055]: cilium_vxlan: Gained IPv6LL
May 14 00:37:33.558237 systemd-networkd[1055]: lxcca9b2d17e185: Link UP
May 14 00:37:33.571907 kernel: eth0: renamed from tmpc4c55
May 14 00:37:33.579890 kernel: eth0: renamed from tmp1c945
May 14 00:37:33.590979 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdc3814ac717b: link becomes ready
May 14 00:37:33.591069 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 14 00:37:33.591087 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcca9b2d17e185: link becomes ready
May 14 00:37:33.589320 systemd-networkd[1055]: lxcdc3814ac717b: Link UP
May 14 00:37:33.589612 systemd-networkd[1055]: lxcdc3814ac717b: Gained carrier
May 14 00:37:33.589742 systemd-networkd[1055]: lxcca9b2d17e185: Gained carrier
May 14 00:37:33.642531 sshd[3217]: pam_unix(sshd:session): session closed for user core
May 14 00:37:33.645464 systemd[1]: sshd@6-10.0.0.47:22-10.0.0.1:54004.service: Deactivated successfully.
May 14 00:37:33.646184 systemd[1]: session-7.scope: Deactivated successfully.
May 14 00:37:33.646712 systemd-logind[1203]: Session 7 logged out. Waiting for processes to exit.
May 14 00:37:33.647427 systemd-logind[1203]: Removed session 7.
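The systemd-networkd lines above trace cilium bringing up its datapath devices: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, and one lxc* veth per endpoint (lxc_health for the agent's health endpoint, lxcca9b2d17e185 and lxcdc3814ac717b for the two coredns pods, each paired with the renamed eth0 inside the pod). A quick way to enumerate them on such a node, sketched with the third-party github.com/vishvananda/netlink package (an assumption; Linux-only, and any rtnetlink listing would do equally well):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/vishvananda/netlink" // third-party dependency, Linux-only
)

// Lists the cilium-managed interfaces seen in this journal: the
// cilium_* devices plus the per-endpoint lxc* veths.
func main() {
	links, err := netlink.LinkList()
	if err != nil {
		panic(err)
	}
	for _, l := range links {
		name := l.Attrs().Name
		if strings.HasPrefix(name, "cilium_") || strings.HasPrefix(name, "lxc") {
			fmt.Printf("%-20s type=%s\n", name, l.Type())
		}
	}
}
```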
May 14 00:37:34.148133 kubelet[2019]: E0514 00:37:34.148066 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:34.167636 kubelet[2019]: I0514 00:37:34.167566 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-69wh4" podStartSLOduration=8.960866932 podStartE2EDuration="15.167548699s" podCreationTimestamp="2025-05-14 00:37:19 +0000 UTC" firstStartedPulling="2025-05-14 00:37:20.232359302 +0000 UTC m=+16.484040389" lastFinishedPulling="2025-05-14 00:37:26.439041069 +0000 UTC m=+22.690722156" observedRunningTime="2025-05-14 00:37:30.903578646 +0000 UTC m=+27.155259693" watchObservedRunningTime="2025-05-14 00:37:34.167548699 +0000 UTC m=+30.419229786"
May 14 00:37:34.509162 systemd-networkd[1055]: lxc_health: Gained IPv6LL
May 14 00:37:35.276147 systemd-networkd[1055]: lxcdc3814ac717b: Gained IPv6LL
May 14 00:37:35.276415 systemd-networkd[1055]: lxcca9b2d17e185: Gained IPv6LL
May 14 00:37:37.080549 env[1216]: time="2025-05-14T00:37:37.080484560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 00:37:37.080927 env[1216]: time="2025-05-14T00:37:37.080526600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 00:37:37.080927 env[1216]: time="2025-05-14T00:37:37.080537320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 00:37:37.080927 env[1216]: time="2025-05-14T00:37:37.080712158Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c945f10f9937806d1689aba5545160ce5eb1364d21e69285d3293558cbdc557 pid=3274 runtime=io.containerd.runc.v2
May 14 00:37:37.085152 env[1216]: time="2025-05-14T00:37:37.085094555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 00:37:37.085259 env[1216]: time="2025-05-14T00:37:37.085136794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 00:37:37.085259 env[1216]: time="2025-05-14T00:37:37.085146914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 00:37:37.085330 env[1216]: time="2025-05-14T00:37:37.085262633Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4c555d95003e5be5b606e179be0c45f848ca0bffcdd5ead37a54a1f907882da pid=3282 runtime=io.containerd.runc.v2
May 14 00:37:37.097871 systemd[1]: run-containerd-runc-k8s.io-1c945f10f9937806d1689aba5545160ce5eb1364d21e69285d3293558cbdc557-runc.4dHc0Y.mount: Deactivated successfully.
May 14 00:37:37.100379 systemd[1]: Started cri-containerd-1c945f10f9937806d1689aba5545160ce5eb1364d21e69285d3293558cbdc557.scope.
May 14 00:37:37.105627 systemd[1]: Started cri-containerd-c4c555d95003e5be5b606e179be0c45f848ca0bffcdd5ead37a54a1f907882da.scope.
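The cilium-69wh4 startup record above is a good place to see how the pod_startup_latency_tracker numbers relate: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that minus the image-pull window (lastFinishedPulling − firstStartedPulling), i.e. 15.167548699s − 6.206681767s = 8.960866932s. The same arithmetic, checked with the timestamps from that entry (a sketch; the strings are Go's default time format, with the monotonic m=+ suffixes dropped):

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the arithmetic in the cilium-69wh4 startup record:
// E2E duration = watchObservedRunningTime - podCreationTimestamp,
// SLO duration = E2E duration - image pull window.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-05-14 00:37:19 +0000 UTC")
	observed := parse("2025-05-14 00:37:34.167548699 +0000 UTC")
	pullStart := parse("2025-05-14 00:37:20.232359302 +0000 UTC")
	pullEnd := parse("2025-05-14 00:37:26.439041069 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e) // 15.167548699s, matching podStartE2EDuration
	fmt.Println(slo) // 8.960866932s, matching podStartSLOduration
}
```

The static control-plane pods logged earlier show the degenerate case: their pull timestamps are the zero time (0001-01-01), so SLO and E2E durations coincide.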
May 14 00:37:37.143218 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 00:37:37.143570 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 00:37:37.161475 env[1216]: time="2025-05-14T00:37:37.161415039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jlfg4,Uid:dab7c2a4-3672-423b-ae4d-564e6e0c16ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4c555d95003e5be5b606e179be0c45f848ca0bffcdd5ead37a54a1f907882da\""
May 14 00:37:37.162541 kubelet[2019]: E0514 00:37:37.162511 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:37.166087 env[1216]: time="2025-05-14T00:37:37.165972273Z" level=info msg="CreateContainer within sandbox \"c4c555d95003e5be5b606e179be0c45f848ca0bffcdd5ead37a54a1f907882da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 00:37:37.171030 env[1216]: time="2025-05-14T00:37:37.170981024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hdg8n,Uid:8f1d3147-b5cc-4654-b923-f85447143af6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c945f10f9937806d1689aba5545160ce5eb1364d21e69285d3293558cbdc557\""
May 14 00:37:37.172535 kubelet[2019]: E0514 00:37:37.172501 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:37:37.174680 env[1216]: time="2025-05-14T00:37:37.174641428Z" level=info msg="CreateContainer within sandbox \"1c945f10f9937806d1689aba5545160ce5eb1364d21e69285d3293558cbdc557\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 00:37:37.183526 env[1216]: time="2025-05-14T00:37:37.183477500Z" level=info msg="CreateContainer within sandbox \"c4c555d95003e5be5b606e179be0c45f848ca0bffcdd5ead37a54a1f907882da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1cd1b799e885639c8c077a32596854ce62a1095b6d55276473433b8a6dcba6b6\""
May 14 00:37:37.185036 env[1216]: time="2025-05-14T00:37:37.184974165Z" level=info msg="StartContainer for \"1cd1b799e885639c8c077a32596854ce62a1095b6d55276473433b8a6dcba6b6\""
May 14 00:37:37.190595 env[1216]: time="2025-05-14T00:37:37.190510350Z" level=info msg="CreateContainer within sandbox \"1c945f10f9937806d1689aba5545160ce5eb1364d21e69285d3293558cbdc557\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bc866ed1732f3c39d021bbaec7b3ccb4764d4e2e77614588e6a955faaff8b57\""
May 14 00:37:37.191026 env[1216]: time="2025-05-14T00:37:37.191002026Z" level=info msg="StartContainer for \"9bc866ed1732f3c39d021bbaec7b3ccb4764d4e2e77614588e6a955faaff8b57\""
May 14 00:37:37.200889 systemd[1]: Started cri-containerd-1cd1b799e885639c8c077a32596854ce62a1095b6d55276473433b8a6dcba6b6.scope.
May 14 00:37:37.217791 systemd[1]: Started cri-containerd-9bc866ed1732f3c39d021bbaec7b3ccb4764d4e2e77614588e6a955faaff8b57.scope.
May 14 00:37:37.253019 env[1216]: time="2025-05-14T00:37:37.252975412Z" level=info msg="StartContainer for \"1cd1b799e885639c8c077a32596854ce62a1095b6d55276473433b8a6dcba6b6\" returns successfully" May 14 00:37:37.269655 env[1216]: time="2025-05-14T00:37:37.269610127Z" level=info msg="StartContainer for \"9bc866ed1732f3c39d021bbaec7b3ccb4764d4e2e77614588e6a955faaff8b57\" returns successfully" May 14 00:37:37.903630 kubelet[2019]: E0514 00:37:37.903599 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:37.905915 kubelet[2019]: E0514 00:37:37.905851 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:37.912986 kubelet[2019]: I0514 00:37:37.912921 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jlfg4" podStartSLOduration=18.912904196 podStartE2EDuration="18.912904196s" podCreationTimestamp="2025-05-14 00:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:37:37.912643398 +0000 UTC m=+34.164324485" watchObservedRunningTime="2025-05-14 00:37:37.912904196 +0000 UTC m=+34.164585283" May 14 00:37:37.931618 kubelet[2019]: I0514 00:37:37.931560 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hdg8n" podStartSLOduration=18.931543411 podStartE2EDuration="18.931543411s" podCreationTimestamp="2025-05-14 00:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:37:37.922666699 +0000 UTC m=+34.174347746" watchObservedRunningTime="2025-05-14 00:37:37.931543411 +0000 UTC m=+34.183224498" May 14 00:37:38.647156 systemd[1]: Started sshd@7-10.0.0.47:22-10.0.0.1:54018.service. May 14 00:37:38.685241 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 54018 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:38.687303 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:38.690848 systemd-logind[1203]: New session 8 of user core. May 14 00:37:38.692268 systemd[1]: Started session-8.scope. May 14 00:37:38.803790 sshd[3430]: pam_unix(sshd:session): session closed for user core May 14 00:37:38.806264 systemd[1]: sshd@7-10.0.0.47:22-10.0.0.1:54018.service: Deactivated successfully. May 14 00:37:38.807049 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:37:38.807532 systemd-logind[1203]: Session 8 logged out. Waiting for processes to exit. May 14 00:37:38.808213 systemd-logind[1203]: Removed session 8. 
May 14 00:37:38.907501 kubelet[2019]: E0514 00:37:38.907421 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:38.907845 kubelet[2019]: E0514 00:37:38.907816 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:39.909272 kubelet[2019]: E0514 00:37:39.909239 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:39.909681 kubelet[2019]: E0514 00:37:39.909311 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:43.808723 systemd[1]: Started sshd@8-10.0.0.47:22-10.0.0.1:41906.service. May 14 00:37:43.845124 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 41906 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:43.846623 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:43.849944 systemd-logind[1203]: New session 9 of user core. May 14 00:37:43.850733 systemd[1]: Started session-9.scope. May 14 00:37:43.964687 sshd[3446]: pam_unix(sshd:session): session closed for user core May 14 00:37:43.967848 systemd[1]: Started sshd@9-10.0.0.47:22-10.0.0.1:41916.service. May 14 00:37:43.968587 systemd-logind[1203]: Session 9 logged out. Waiting for processes to exit. May 14 00:37:43.968795 systemd[1]: sshd@8-10.0.0.47:22-10.0.0.1:41906.service: Deactivated successfully. May 14 00:37:43.969503 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:37:43.970095 systemd-logind[1203]: Removed session 9. May 14 00:37:44.003903 sshd[3460]: Accepted publickey for core from 10.0.0.1 port 41916 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:44.005306 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:44.008261 systemd-logind[1203]: New session 10 of user core. May 14 00:37:44.009114 systemd[1]: Started session-10.scope. May 14 00:37:44.154696 sshd[3460]: pam_unix(sshd:session): session closed for user core May 14 00:37:44.158985 systemd[1]: Started sshd@10-10.0.0.47:22-10.0.0.1:41926.service. May 14 00:37:44.162743 systemd[1]: sshd@9-10.0.0.47:22-10.0.0.1:41916.service: Deactivated successfully. May 14 00:37:44.164220 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:37:44.165826 systemd-logind[1203]: Session 10 logged out. Waiting for processes to exit. May 14 00:37:44.167524 systemd-logind[1203]: Removed session 10. May 14 00:37:44.202683 sshd[3471]: Accepted publickey for core from 10.0.0.1 port 41926 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:44.203938 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:44.207320 systemd-logind[1203]: New session 11 of user core. May 14 00:37:44.208210 systemd[1]: Started session-11.scope. May 14 00:37:44.319173 sshd[3471]: pam_unix(sshd:session): session closed for user core May 14 00:37:44.321861 systemd-logind[1203]: Session 11 logged out. Waiting for processes to exit. May 14 00:37:44.322100 systemd[1]: sshd@10-10.0.0.47:22-10.0.0.1:41926.service: Deactivated successfully. 
May 14 00:37:44.322824 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:37:44.323399 systemd-logind[1203]: Removed session 11. May 14 00:37:48.609978 kubelet[2019]: I0514 00:37:48.609943 2019 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:37:48.611205 kubelet[2019]: E0514 00:37:48.611182 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:48.923895 kubelet[2019]: E0514 00:37:48.923583 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:49.324218 systemd[1]: Started sshd@11-10.0.0.47:22-10.0.0.1:41940.service. May 14 00:37:49.359844 sshd[3486]: Accepted publickey for core from 10.0.0.1 port 41940 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:49.361330 sshd[3486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:49.364766 systemd-logind[1203]: New session 12 of user core. May 14 00:37:49.365580 systemd[1]: Started session-12.scope. May 14 00:37:49.469679 sshd[3486]: pam_unix(sshd:session): session closed for user core May 14 00:37:49.472103 systemd[1]: sshd@11-10.0.0.47:22-10.0.0.1:41940.service: Deactivated successfully. May 14 00:37:49.472909 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:37:49.473406 systemd-logind[1203]: Session 12 logged out. Waiting for processes to exit. May 14 00:37:49.474099 systemd-logind[1203]: Removed session 12. May 14 00:37:54.474453 systemd[1]: Started sshd@12-10.0.0.47:22-10.0.0.1:43816.service. May 14 00:37:54.510195 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 43816 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:54.511537 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:54.515040 systemd-logind[1203]: New session 13 of user core. May 14 00:37:54.515383 systemd[1]: Started session-13.scope. May 14 00:37:54.622203 sshd[3501]: pam_unix(sshd:session): session closed for user core May 14 00:37:54.625085 systemd[1]: Started sshd@13-10.0.0.47:22-10.0.0.1:43828.service. May 14 00:37:54.626034 systemd[1]: sshd@12-10.0.0.47:22-10.0.0.1:43816.service: Deactivated successfully. May 14 00:37:54.626803 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:37:54.627405 systemd-logind[1203]: Session 13 logged out. Waiting for processes to exit. May 14 00:37:54.628236 systemd-logind[1203]: Removed session 13. May 14 00:37:54.664418 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 43828 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:54.665761 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:54.668872 systemd-logind[1203]: New session 14 of user core. May 14 00:37:54.669625 systemd[1]: Started session-14.scope. May 14 00:37:54.869416 sshd[3514]: pam_unix(sshd:session): session closed for user core May 14 00:37:54.872843 systemd[1]: Started sshd@14-10.0.0.47:22-10.0.0.1:43844.service. May 14 00:37:54.873280 systemd[1]: sshd@13-10.0.0.47:22-10.0.0.1:43828.service: Deactivated successfully. May 14 00:37:54.874186 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:37:54.874205 systemd-logind[1203]: Session 14 logged out. Waiting for processes to exit. 
May 14 00:37:54.875193 systemd-logind[1203]: Removed session 14. May 14 00:37:54.908449 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 43844 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:54.910023 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:54.913591 systemd-logind[1203]: New session 15 of user core. May 14 00:37:54.914073 systemd[1]: Started session-15.scope. May 14 00:37:56.152012 sshd[3525]: pam_unix(sshd:session): session closed for user core May 14 00:37:56.154626 systemd[1]: sshd@14-10.0.0.47:22-10.0.0.1:43844.service: Deactivated successfully. May 14 00:37:56.155210 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:37:56.155721 systemd-logind[1203]: Session 15 logged out. Waiting for processes to exit. May 14 00:37:56.156866 systemd[1]: Started sshd@15-10.0.0.47:22-10.0.0.1:43846.service. May 14 00:37:56.158580 systemd-logind[1203]: Removed session 15. May 14 00:37:56.197462 sshd[3546]: Accepted publickey for core from 10.0.0.1 port 43846 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:56.199248 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:56.202536 systemd-logind[1203]: New session 16 of user core. May 14 00:37:56.203407 systemd[1]: Started session-16.scope. May 14 00:37:56.409106 sshd[3546]: pam_unix(sshd:session): session closed for user core May 14 00:37:56.413871 systemd[1]: sshd@15-10.0.0.47:22-10.0.0.1:43846.service: Deactivated successfully. May 14 00:37:56.414489 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:37:56.415164 systemd-logind[1203]: Session 16 logged out. Waiting for processes to exit. May 14 00:37:56.416571 systemd[1]: Started sshd@16-10.0.0.47:22-10.0.0.1:43860.service. May 14 00:37:56.417418 systemd-logind[1203]: Removed session 16. May 14 00:37:56.452516 sshd[3560]: Accepted publickey for core from 10.0.0.1 port 43860 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:56.453725 sshd[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:56.457045 systemd-logind[1203]: New session 17 of user core. May 14 00:37:56.457900 systemd[1]: Started session-17.scope. May 14 00:37:56.564535 sshd[3560]: pam_unix(sshd:session): session closed for user core May 14 00:37:56.566771 systemd[1]: sshd@16-10.0.0.47:22-10.0.0.1:43860.service: Deactivated successfully. May 14 00:37:56.567523 systemd[1]: session-17.scope: Deactivated successfully. May 14 00:37:56.568082 systemd-logind[1203]: Session 17 logged out. Waiting for processes to exit. May 14 00:37:56.568787 systemd-logind[1203]: Removed session 17. May 14 00:38:01.569017 systemd[1]: Started sshd@17-10.0.0.47:22-10.0.0.1:43870.service. May 14 00:38:01.607468 sshd[3576]: Accepted publickey for core from 10.0.0.1 port 43870 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:38:01.608996 sshd[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:38:01.612311 systemd-logind[1203]: New session 18 of user core. May 14 00:38:01.613206 systemd[1]: Started session-18.scope. May 14 00:38:01.715642 sshd[3576]: pam_unix(sshd:session): session closed for user core May 14 00:38:01.717847 systemd[1]: sshd@17-10.0.0.47:22-10.0.0.1:43870.service: Deactivated successfully. May 14 00:38:01.718576 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:38:01.719141 systemd-logind[1203]: Session 18 logged out. Waiting for processes to exit.
May 14 00:38:01.719847 systemd-logind[1203]: Removed session 18. May 14 00:38:06.719862 systemd[1]: Started sshd@18-10.0.0.47:22-10.0.0.1:52500.service. May 14 00:38:06.756164 sshd[3591]: Accepted publickey for core from 10.0.0.1 port 52500 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:38:06.758167 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:38:06.761834 systemd-logind[1203]: New session 19 of user core. May 14 00:38:06.762291 systemd[1]: Started session-19.scope. May 14 00:38:06.867219 sshd[3591]: pam_unix(sshd:session): session closed for user core May 14 00:38:06.869511 systemd[1]: sshd@18-10.0.0.47:22-10.0.0.1:52500.service: Deactivated successfully. May 14 00:38:06.870277 systemd[1]: session-19.scope: Deactivated successfully. May 14 00:38:06.870781 systemd-logind[1203]: Session 19 logged out. Waiting for processes to exit. May 14 00:38:06.871443 systemd-logind[1203]: Removed session 19. May 14 00:38:11.825851 kubelet[2019]: E0514 00:38:11.825815 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:11.871927 systemd[1]: Started sshd@19-10.0.0.47:22-10.0.0.1:52512.service. May 14 00:38:11.907782 sshd[3604]: Accepted publickey for core from 10.0.0.1 port 52512 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:38:11.909420 sshd[3604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:38:11.912532 systemd-logind[1203]: New session 20 of user core. May 14 00:38:11.913391 systemd[1]: Started session-20.scope. May 14 00:38:12.019294 sshd[3604]: pam_unix(sshd:session): session closed for user core May 14 00:38:12.021637 systemd[1]: sshd@19-10.0.0.47:22-10.0.0.1:52512.service: Deactivated successfully. May 14 00:38:12.022378 systemd[1]: session-20.scope: Deactivated successfully. May 14 00:38:12.022891 systemd-logind[1203]: Session 20 logged out. Waiting for processes to exit. May 14 00:38:12.023586 systemd-logind[1203]: Removed session 20. May 14 00:38:17.023708 systemd[1]: Started sshd@20-10.0.0.47:22-10.0.0.1:36498.service. May 14 00:38:17.059869 sshd[3617]: Accepted publickey for core from 10.0.0.1 port 36498 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:38:17.061361 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:38:17.064575 systemd-logind[1203]: New session 21 of user core. May 14 00:38:17.065131 systemd[1]: Started session-21.scope. May 14 00:38:17.167031 sshd[3617]: pam_unix(sshd:session): session closed for user core May 14 00:38:17.169795 systemd[1]: sshd@20-10.0.0.47:22-10.0.0.1:36498.service: Deactivated successfully. May 14 00:38:17.170415 systemd[1]: session-21.scope: Deactivated successfully. May 14 00:38:17.170920 systemd-logind[1203]: Session 21 logged out. Waiting for processes to exit. May 14 00:38:17.172091 systemd[1]: Started sshd@21-10.0.0.47:22-10.0.0.1:36504.service. May 14 00:38:17.172708 systemd-logind[1203]: Removed session 21. May 14 00:38:17.207960 sshd[3630]: Accepted publickey for core from 10.0.0.1 port 36504 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:38:17.209091 sshd[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:38:17.212234 systemd-logind[1203]: New session 22 of user core.
May 14 00:38:17.213158 systemd[1]: Started session-22.scope. May 14 00:38:19.174895 env[1216]: time="2025-05-14T00:38:19.174836504Z" level=info msg="StopContainer for \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\" with timeout 30 (s)" May 14 00:38:19.176647 env[1216]: time="2025-05-14T00:38:19.176604256Z" level=info msg="Stop container \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\" with signal terminated" May 14 00:38:19.187532 systemd[1]: cri-containerd-fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83.scope: Deactivated successfully. May 14 00:38:19.209664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83-rootfs.mount: Deactivated successfully. May 14 00:38:19.218996 env[1216]: time="2025-05-14T00:38:19.218940124Z" level=info msg="shim disconnected" id=fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83 May 14 00:38:19.218996 env[1216]: time="2025-05-14T00:38:19.218993485Z" level=warning msg="cleaning up after shim disconnected" id=fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83 namespace=k8s.io May 14 00:38:19.219199 env[1216]: time="2025-05-14T00:38:19.219002965Z" level=info msg="cleaning up dead shim" May 14 00:38:19.219777 env[1216]: time="2025-05-14T00:38:19.219726817Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:38:19.226200 env[1216]: time="2025-05-14T00:38:19.226169771Z" level=info msg="StopContainer for \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\" with timeout 2 (s)" May 14 00:38:19.226670 env[1216]: time="2025-05-14T00:38:19.226646060Z" level=info msg="Stop container \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\" with signal terminated" May 14 00:38:19.227781 env[1216]: time="2025-05-14T00:38:19.227747679Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3676 runtime=io.containerd.runc.v2\n" May 14 00:38:19.229918 env[1216]: time="2025-05-14T00:38:19.229865157Z" level=info msg="StopContainer for \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\" returns successfully" May 14 00:38:19.230388 env[1216]: time="2025-05-14T00:38:19.230358485Z" level=info msg="StopPodSandbox for \"dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739\"" May 14 00:38:19.230537 env[1216]: time="2025-05-14T00:38:19.230514208Z" level=info msg="Container to stop \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:38:19.232343 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739-shm.mount: Deactivated successfully. May 14 00:38:19.234502 systemd-networkd[1055]: lxc_health: Link DOWN May 14 00:38:19.234506 systemd-networkd[1055]: lxc_health: Lost carrier May 14 00:38:19.241328 systemd[1]: cri-containerd-dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739.scope: Deactivated successfully. May 14 00:38:19.257728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739-rootfs.mount: Deactivated successfully. 
May 14 00:38:19.263546 env[1216]: time="2025-05-14T00:38:19.263498111Z" level=info msg="shim disconnected" id=dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739 May 14 00:38:19.263546 env[1216]: time="2025-05-14T00:38:19.263545632Z" level=warning msg="cleaning up after shim disconnected" id=dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739 namespace=k8s.io May 14 00:38:19.263546 env[1216]: time="2025-05-14T00:38:19.263555272Z" level=info msg="cleaning up dead shim" May 14 00:38:19.270284 systemd[1]: cri-containerd-3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673.scope: Deactivated successfully. May 14 00:38:19.270591 systemd[1]: cri-containerd-3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673.scope: Consumed 6.397s CPU time. May 14 00:38:19.271989 env[1216]: time="2025-05-14T00:38:19.271939900Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3719 runtime=io.containerd.runc.v2\n" May 14 00:38:19.272275 env[1216]: time="2025-05-14T00:38:19.272241145Z" level=info msg="TearDown network for sandbox \"dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739\" successfully" May 14 00:38:19.272275 env[1216]: time="2025-05-14T00:38:19.272266866Z" level=info msg="StopPodSandbox for \"dd6ec7483da1464e1be16470197e1d5c921d30f60d4dac3fadb5e5c17fedc739\" returns successfully" May 14 00:38:19.289616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673-rootfs.mount: Deactivated successfully. May 14 00:38:19.295423 env[1216]: time="2025-05-14T00:38:19.295379114Z" level=info msg="shim disconnected" id=3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673 May 14 00:38:19.295675 env[1216]: time="2025-05-14T00:38:19.295656399Z" level=warning msg="cleaning up after shim disconnected" id=3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673 namespace=k8s.io May 14 00:38:19.295757 env[1216]: time="2025-05-14T00:38:19.295739120Z" level=info msg="cleaning up dead shim" May 14 00:38:19.302520 env[1216]: time="2025-05-14T00:38:19.302481400Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3743 runtime=io.containerd.runc.v2\n" May 14 00:38:19.304467 env[1216]: time="2025-05-14T00:38:19.304431674Z" level=info msg="StopContainer for \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\" returns successfully" May 14 00:38:19.305040 env[1216]: time="2025-05-14T00:38:19.305015324Z" level=info msg="StopPodSandbox for \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\"" May 14 00:38:19.305119 env[1216]: time="2025-05-14T00:38:19.305076645Z" level=info msg="Container to stop \"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:38:19.305119 env[1216]: time="2025-05-14T00:38:19.305091686Z" level=info msg="Container to stop \"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:38:19.305119 env[1216]: time="2025-05-14T00:38:19.305104006Z" level=info msg="Container to stop \"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 00:38:19.305119 env[1216]: time="2025-05-14T00:38:19.305115726Z" level=info msg="Container to stop \"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:38:19.305256 env[1216]: time="2025-05-14T00:38:19.305129526Z" level=info msg="Container to stop \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:38:19.309959 systemd[1]: cri-containerd-611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e.scope: Deactivated successfully. May 14 00:38:19.330198 env[1216]: time="2025-05-14T00:38:19.330131768Z" level=info msg="shim disconnected" id=611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e May 14 00:38:19.330198 env[1216]: time="2025-05-14T00:38:19.330188929Z" level=warning msg="cleaning up after shim disconnected" id=611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e namespace=k8s.io May 14 00:38:19.330198 env[1216]: time="2025-05-14T00:38:19.330198809Z" level=info msg="cleaning up dead shim" May 14 00:38:19.338206 env[1216]: time="2025-05-14T00:38:19.337822024Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3774 runtime=io.containerd.runc.v2\n" May 14 00:38:19.338206 env[1216]: time="2025-05-14T00:38:19.338144790Z" level=info msg="TearDown network for sandbox \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" successfully" May 14 00:38:19.338206 env[1216]: time="2025-05-14T00:38:19.338168950Z" level=info msg="StopPodSandbox for \"611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e\" returns successfully" May 14 00:38:19.429534 kubelet[2019]: I0514 00:38:19.429425 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9ktp\" (UniqueName: \"kubernetes.io/projected/700f1054-d2a0-48a6-85f3-aeb90e95832a-kube-api-access-b9ktp\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.429534 kubelet[2019]: I0514 00:38:19.429471 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/700f1054-d2a0-48a6-85f3-aeb90e95832a-clustermesh-secrets\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.429534 kubelet[2019]: I0514 00:38:19.429492 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w72xk\" (UniqueName: \"kubernetes.io/projected/efd62912-b8d5-4e30-bff1-ff187229b969-kube-api-access-w72xk\") pod \"efd62912-b8d5-4e30-bff1-ff187229b969\" (UID: \"efd62912-b8d5-4e30-bff1-ff187229b969\") " May 14 00:38:19.429534 kubelet[2019]: I0514 00:38:19.429510 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/700f1054-d2a0-48a6-85f3-aeb90e95832a-hubble-tls\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.429534 kubelet[2019]: I0514 00:38:19.429526 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-xtables-lock\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") "
May 14 00:38:19.429534 kubelet[2019]: I0514 00:38:19.429539 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-run\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.430151 kubelet[2019]: I0514 00:38:19.429553 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-etc-cni-netd\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.430151 kubelet[2019]: I0514 00:38:19.429568 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-host-proc-sys-net\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.430151 kubelet[2019]: I0514 00:38:19.429585 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-config-path\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.430151 kubelet[2019]: I0514 00:38:19.429599 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-host-proc-sys-kernel\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.430151 kubelet[2019]: I0514 00:38:19.429619 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-lib-modules\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.430151 kubelet[2019]: I0514 00:38:19.429632 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-cgroup\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.430295 kubelet[2019]: I0514 00:38:19.429649 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-hostproc\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.430295 kubelet[2019]: I0514 00:38:19.429662 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cni-path\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") " May 14 00:38:19.430295 kubelet[2019]: I0514 00:38:19.429677 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-bpf-maps\") pod \"700f1054-d2a0-48a6-85f3-aeb90e95832a\" (UID: \"700f1054-d2a0-48a6-85f3-aeb90e95832a\") "
May 14 00:38:19.430295 kubelet[2019]: I0514 00:38:19.429693 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efd62912-b8d5-4e30-bff1-ff187229b969-cilium-config-path\") pod \"efd62912-b8d5-4e30-bff1-ff187229b969\" (UID: \"efd62912-b8d5-4e30-bff1-ff187229b969\") " May 14 00:38:19.433032 kubelet[2019]: I0514 00:38:19.432977 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.433115 kubelet[2019]: I0514 00:38:19.433062 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.433115 kubelet[2019]: I0514 00:38:19.433079 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.433115 kubelet[2019]: I0514 00:38:19.433094 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.438909 kubelet[2019]: I0514 00:38:19.438868 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efd62912-b8d5-4e30-bff1-ff187229b969-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "efd62912-b8d5-4e30-bff1-ff187229b969" (UID: "efd62912-b8d5-4e30-bff1-ff187229b969"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:38:19.439021 kubelet[2019]: I0514 00:38:19.438935 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-hostproc" (OuterVolumeSpecName: "hostproc") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.439021 kubelet[2019]: I0514 00:38:19.438955 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cni-path" (OuterVolumeSpecName: "cni-path") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 00:38:19.439021 kubelet[2019]: I0514 00:38:19.438973 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.440168 kubelet[2019]: I0514 00:38:19.440109 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.441896 kubelet[2019]: I0514 00:38:19.441847 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:38:19.441969 kubelet[2019]: I0514 00:38:19.441902 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.441969 kubelet[2019]: I0514 00:38:19.441928 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.443787 kubelet[2019]: I0514 00:38:19.443745 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd62912-b8d5-4e30-bff1-ff187229b969-kube-api-access-w72xk" (OuterVolumeSpecName: "kube-api-access-w72xk") pod "efd62912-b8d5-4e30-bff1-ff187229b969" (UID: "efd62912-b8d5-4e30-bff1-ff187229b969"). InnerVolumeSpecName "kube-api-access-w72xk". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:38:19.443787 kubelet[2019]: I0514 00:38:19.443758 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/700f1054-d2a0-48a6-85f3-aeb90e95832a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:38:19.444011 kubelet[2019]: I0514 00:38:19.443985 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700f1054-d2a0-48a6-85f3-aeb90e95832a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 00:38:19.444211 kubelet[2019]: I0514 00:38:19.444191 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700f1054-d2a0-48a6-85f3-aeb90e95832a-kube-api-access-b9ktp" (OuterVolumeSpecName: "kube-api-access-b9ktp") pod "700f1054-d2a0-48a6-85f3-aeb90e95832a" (UID: "700f1054-d2a0-48a6-85f3-aeb90e95832a"). InnerVolumeSpecName "kube-api-access-b9ktp". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:38:19.530206 kubelet[2019]: I0514 00:38:19.530163 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530206 kubelet[2019]: I0514 00:38:19.530195 2019 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530206 kubelet[2019]: I0514 00:38:19.530205 2019 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530206 kubelet[2019]: I0514 00:38:19.530213 2019 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530409 kubelet[2019]: I0514 00:38:19.530222 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efd62912-b8d5-4e30-bff1-ff187229b969-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530409 kubelet[2019]: I0514 00:38:19.530231 2019 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530409 kubelet[2019]: I0514 00:38:19.530239 2019 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-b9ktp\" (UniqueName: \"kubernetes.io/projected/700f1054-d2a0-48a6-85f3-aeb90e95832a-kube-api-access-b9ktp\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530409 kubelet[2019]: I0514 00:38:19.530247 2019 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/700f1054-d2a0-48a6-85f3-aeb90e95832a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530409 kubelet[2019]: I0514 00:38:19.530256 2019 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w72xk\" (UniqueName: \"kubernetes.io/projected/efd62912-b8d5-4e30-bff1-ff187229b969-kube-api-access-w72xk\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530409 kubelet[2019]: I0514 00:38:19.530264 2019 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/700f1054-d2a0-48a6-85f3-aeb90e95832a-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530409 kubelet[2019]: I0514 00:38:19.530270 2019 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 14 00:38:19.530409 kubelet[2019]: I0514 00:38:19.530278 2019 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530592 kubelet[2019]: I0514 00:38:19.530285 2019 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530592 kubelet[2019]: I0514 00:38:19.530292 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530592 kubelet[2019]: I0514 00:38:19.530299 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/700f1054-d2a0-48a6-85f3-aeb90e95832a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.530592 kubelet[2019]: I0514 00:38:19.530307 2019 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/700f1054-d2a0-48a6-85f3-aeb90e95832a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 00:38:19.833209 systemd[1]: Removed slice kubepods-besteffort-podefd62912_b8d5_4e30_bff1_ff187229b969.slice. May 14 00:38:19.834207 systemd[1]: Removed slice kubepods-burstable-pod700f1054_d2a0_48a6_85f3_aeb90e95832a.slice. May 14 00:38:19.834285 systemd[1]: kubepods-burstable-pod700f1054_d2a0_48a6_85f3_aeb90e95832a.slice: Consumed 6.655s CPU time. May 14 00:38:19.980083 kubelet[2019]: I0514 00:38:19.980051 2019 scope.go:117] "RemoveContainer" containerID="fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83" May 14 00:38:19.982301 env[1216]: time="2025-05-14T00:38:19.982262409Z" level=info msg="RemoveContainer for \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\"" May 14 00:38:19.987105 env[1216]: time="2025-05-14T00:38:19.987068654Z" level=info msg="RemoveContainer for \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\" returns successfully" May 14 00:38:19.987488 kubelet[2019]: I0514 00:38:19.987467 2019 scope.go:117] "RemoveContainer" containerID="fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83" May 14 00:38:19.988156 env[1216]: time="2025-05-14T00:38:19.988093792Z" level=error msg="ContainerStatus for \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\": not found" May 14 00:38:19.988430 kubelet[2019]: E0514 00:38:19.988318 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\": not found" containerID="fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83" May 14 00:38:19.988430 kubelet[2019]: I0514 00:38:19.988345 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83"} err="failed to get container status \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc1ebf7d6a8d5cb2f32ffe9101c8f1738743e4cea13f6ac915aaca0849d3bd83\": not found"
May 14 00:38:19.988430 kubelet[2019]: I0514 00:38:19.988427 2019 scope.go:117] "RemoveContainer" containerID="3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673" May 14 00:38:19.989294 env[1216]: time="2025-05-14T00:38:19.989250773Z" level=info msg="RemoveContainer for \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\"" May 14 00:38:19.991888 env[1216]: time="2025-05-14T00:38:19.991845498Z" level=info msg="RemoveContainer for \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\" returns successfully" May 14 00:38:19.992968 kubelet[2019]: I0514 00:38:19.992045 2019 scope.go:117] "RemoveContainer" containerID="f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f" May 14 00:38:19.993482 env[1216]: time="2025-05-14T00:38:19.993456527Z" level=info msg="RemoveContainer for \"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f\"" May 14 00:38:19.995851 env[1216]: time="2025-05-14T00:38:19.995814969Z" level=info msg="RemoveContainer for \"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f\" returns successfully" May 14 00:38:19.996197 kubelet[2019]: I0514 00:38:19.996179 2019 scope.go:117] "RemoveContainer" containerID="bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf" May 14 00:38:19.997153 env[1216]: time="2025-05-14T00:38:19.997098871Z" level=info msg="RemoveContainer for \"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf\"" May 14 00:38:19.999569 env[1216]: time="2025-05-14T00:38:19.999532234Z" level=info msg="RemoveContainer for \"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf\" returns successfully" May 14 00:38:20.000006 kubelet[2019]: I0514 00:38:19.999986 2019 scope.go:117] "RemoveContainer" containerID="f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540" May 14 00:38:20.002572 env[1216]: time="2025-05-14T00:38:20.002543766Z" level=info msg="RemoveContainer for \"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540\"" May 14 00:38:20.004937 env[1216]: time="2025-05-14T00:38:20.004906967Z" level=info msg="RemoveContainer for \"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540\" returns successfully" May 14 00:38:20.005221 kubelet[2019]: I0514 00:38:20.005202 2019 scope.go:117] "RemoveContainer" containerID="1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f" May 14 00:38:20.006394 env[1216]: time="2025-05-14T00:38:20.006367232Z" level=info msg="RemoveContainer for \"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f\"" May 14 00:38:20.008634 env[1216]: time="2025-05-14T00:38:20.008607950Z" level=info msg="RemoveContainer for \"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f\" returns successfully" May 14 00:38:20.008919 kubelet[2019]: I0514 00:38:20.008898 2019 scope.go:117] "RemoveContainer" containerID="3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673" May 14 00:38:20.009244 env[1216]: time="2025-05-14T00:38:20.009172000Z" level=error msg="ContainerStatus for \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\": not found" May 14 00:38:20.009420 kubelet[2019]: E0514 00:38:20.009392 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\": not found" containerID="3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673"
May 14 00:38:20.009530 kubelet[2019]: I0514 00:38:20.009505 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673"} err="failed to get container status \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bce79c7a93280d692259fa11b20cdb22e0148ac655e27300d79a5279fae7673\": not found" May 14 00:38:20.009600 kubelet[2019]: I0514 00:38:20.009589 2019 scope.go:117] "RemoveContainer" containerID="f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f" May 14 00:38:20.009943 env[1216]: time="2025-05-14T00:38:20.009896132Z" level=error msg="ContainerStatus for \"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f\": not found" May 14 00:38:20.010178 kubelet[2019]: E0514 00:38:20.010158 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f\": not found" containerID="f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f" May 14 00:38:20.010275 kubelet[2019]: I0514 00:38:20.010256 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f"} err="failed to get container status \"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f243892b495601f7f29ecbec3262922f40c491d91d5b52fea0b1bde06cc8f28f\": not found" May 14 00:38:20.010341 kubelet[2019]: I0514 00:38:20.010330 2019 scope.go:117] "RemoveContainer" containerID="bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf" May 14 00:38:20.010590 env[1216]: time="2025-05-14T00:38:20.010547743Z" level=error msg="ContainerStatus for \"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf\": not found" May 14 00:38:20.010752 kubelet[2019]: E0514 00:38:20.010726 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf\": not found" containerID="bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf" May 14 00:38:20.010861 kubelet[2019]: I0514 00:38:20.010840 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf"} err="failed to get container status \"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc03870cbc22ab615da01f34139a0e4f8a6eb1fbfc4529b19e4b5ad73d5821cf\": not found"
May 14 00:38:20.010957 kubelet[2019]: I0514 00:38:20.010943 2019 scope.go:117] "RemoveContainer" containerID="f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540" May 14 00:38:20.011250 env[1216]: time="2025-05-14T00:38:20.011198034Z" level=error msg="ContainerStatus for \"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540\": not found" May 14 00:38:20.011497 kubelet[2019]: E0514 00:38:20.011475 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540\": not found" containerID="f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540" May 14 00:38:20.011621 kubelet[2019]: I0514 00:38:20.011590 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540"} err="failed to get container status \"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540\": rpc error: code = NotFound desc = an error occurred when try to find container \"f09ae7a653338b464c8373e39977932bcdbf70242568ce6a35caf027909fe540\": not found" May 14 00:38:20.011698 kubelet[2019]: I0514 00:38:20.011684 2019 scope.go:117] "RemoveContainer" containerID="1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f" May 14 00:38:20.012026 env[1216]: time="2025-05-14T00:38:20.011965967Z" level=error msg="ContainerStatus for \"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f\": not found" May 14 00:38:20.012233 kubelet[2019]: E0514 00:38:20.012211 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f\": not found" containerID="1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f" May 14 00:38:20.012306 kubelet[2019]: I0514 00:38:20.012241 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f"} err="failed to get container status \"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c1d28bed8ae0bdd5af6b8eb8a69908476e983aa834a41c81afdb7f91a3aa02f\": not found" May 14 00:38:20.178783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e-rootfs.mount: Deactivated successfully. May 14 00:38:20.178908 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-611ae07fac56bdc83a047d08d0a1efa36d0067a818b3a5569e0526254ec0673e-shm.mount: Deactivated successfully. May 14 00:38:20.178969 systemd[1]: var-lib-kubelet-pods-efd62912\x2db8d5\x2d4e30\x2dbff1\x2dff187229b969-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw72xk.mount: Deactivated successfully.
May 14 00:38:20.179022 systemd[1]: var-lib-kubelet-pods-700f1054\x2dd2a0\x2d48a6\x2d85f3\x2daeb90e95832a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db9ktp.mount: Deactivated successfully. May 14 00:38:20.179073 systemd[1]: var-lib-kubelet-pods-700f1054\x2dd2a0\x2d48a6\x2d85f3\x2daeb90e95832a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 00:38:20.179120 systemd[1]: var-lib-kubelet-pods-700f1054\x2dd2a0\x2d48a6\x2d85f3\x2daeb90e95832a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:38:21.140154 sshd[3630]: pam_unix(sshd:session): session closed for user core May 14 00:38:21.143478 systemd[1]: sshd@21-10.0.0.47:22-10.0.0.1:36504.service: Deactivated successfully. May 14 00:38:21.144115 systemd[1]: session-22.scope: Deactivated successfully. May 14 00:38:21.144280 systemd[1]: session-22.scope: Consumed 1.304s CPU time. May 14 00:38:21.144668 systemd-logind[1203]: Session 22 logged out. Waiting for processes to exit. May 14 00:38:21.146011 systemd[1]: Started sshd@22-10.0.0.47:22-10.0.0.1:36512.service. May 14 00:38:21.146782 systemd-logind[1203]: Removed session 22. May 14 00:38:21.187437 sshd[3794]: Accepted publickey for core from 10.0.0.1 port 36512 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:38:21.188727 sshd[3794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:38:21.193097 systemd-logind[1203]: New session 23 of user core. May 14 00:38:21.193563 systemd[1]: Started session-23.scope. May 14 00:38:21.827464 kubelet[2019]: I0514 00:38:21.827428 2019 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="700f1054-d2a0-48a6-85f3-aeb90e95832a" path="/var/lib/kubelet/pods/700f1054-d2a0-48a6-85f3-aeb90e95832a/volumes" May 14 00:38:21.828378 kubelet[2019]: I0514 00:38:21.828353 2019 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efd62912-b8d5-4e30-bff1-ff187229b969" path="/var/lib/kubelet/pods/efd62912-b8d5-4e30-bff1-ff187229b969/volumes" May 14 00:38:21.930215 systemd[1]: Started sshd@23-10.0.0.47:22-10.0.0.1:36526.service. 
May 14 00:38:21.941508 sshd[3794]: pam_unix(sshd:session): session closed for user core May 14 00:38:21.946912 kubelet[2019]: I0514 00:38:21.946766 2019 topology_manager.go:215] "Topology Admit Handler" podUID="4cab8b26-d933-46c1-ad1f-02d67c3ea160" podNamespace="kube-system" podName="cilium-89bkz" May 14 00:38:21.946912 kubelet[2019]: E0514 00:38:21.946903 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="700f1054-d2a0-48a6-85f3-aeb90e95832a" containerName="mount-cgroup" May 14 00:38:21.946912 kubelet[2019]: E0514 00:38:21.946916 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="700f1054-d2a0-48a6-85f3-aeb90e95832a" containerName="mount-bpf-fs" May 14 00:38:21.947067 kubelet[2019]: E0514 00:38:21.946924 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="700f1054-d2a0-48a6-85f3-aeb90e95832a" containerName="apply-sysctl-overwrites" May 14 00:38:21.947067 kubelet[2019]: E0514 00:38:21.946930 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="efd62912-b8d5-4e30-bff1-ff187229b969" containerName="cilium-operator" May 14 00:38:21.947067 kubelet[2019]: E0514 00:38:21.946935 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="700f1054-d2a0-48a6-85f3-aeb90e95832a" containerName="clean-cilium-state" May 14 00:38:21.947067 kubelet[2019]: E0514 00:38:21.946941 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="700f1054-d2a0-48a6-85f3-aeb90e95832a" containerName="cilium-agent" May 14 00:38:21.947067 kubelet[2019]: I0514 00:38:21.946965 2019 memory_manager.go:354] "RemoveStaleState removing state" podUID="700f1054-d2a0-48a6-85f3-aeb90e95832a" containerName="cilium-agent" May 14 00:38:21.947067 kubelet[2019]: I0514 00:38:21.946971 2019 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd62912-b8d5-4e30-bff1-ff187229b969" containerName="cilium-operator" May 14 00:38:21.949327 systemd[1]: session-23.scope: Deactivated successfully. May 14 00:38:21.950121 systemd[1]: sshd@22-10.0.0.47:22-10.0.0.1:36512.service: Deactivated successfully. May 14 00:38:21.951111 systemd-logind[1203]: Session 23 logged out. Waiting for processes to exit. May 14 00:38:21.954302 systemd-logind[1203]: Removed session 23. May 14 00:38:21.960030 systemd[1]: Created slice kubepods-burstable-pod4cab8b26_d933_46c1_ad1f_02d67c3ea160.slice. May 14 00:38:21.979250 sshd[3805]: Accepted publickey for core from 10.0.0.1 port 36526 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:38:21.981042 sshd[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:38:21.984590 systemd-logind[1203]: New session 24 of user core. May 14 00:38:21.985377 systemd[1]: Started session-24.scope. 
May 14 00:38:22.042456 kubelet[2019]: I0514 00:38:22.042415 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cni-path\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042456 kubelet[2019]: I0514 00:38:22.042461 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cab8b26-d933-46c1-ad1f-02d67c3ea160-clustermesh-secrets\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042636 kubelet[2019]: I0514 00:38:22.042480 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-ipsec-secrets\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042636 kubelet[2019]: I0514 00:38:22.042496 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-run\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042636 kubelet[2019]: I0514 00:38:22.042511 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-bpf-maps\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042636 kubelet[2019]: I0514 00:38:22.042527 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-hostproc\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042636 kubelet[2019]: I0514 00:38:22.042543 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-cgroup\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042636 kubelet[2019]: I0514 00:38:22.042558 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-host-proc-sys-net\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042818 kubelet[2019]: I0514 00:38:22.042574 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-etc-cni-netd\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042818 kubelet[2019]: I0514 00:38:22.042590 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-xtables-lock\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042818 kubelet[2019]: I0514 00:38:22.042605 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm7z6\" (UniqueName: \"kubernetes.io/projected/4cab8b26-d933-46c1-ad1f-02d67c3ea160-kube-api-access-hm7z6\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042818 kubelet[2019]: I0514 00:38:22.042626 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-config-path\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.042818 kubelet[2019]: I0514 00:38:22.042640 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-lib-modules\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.043036 kubelet[2019]: I0514 00:38:22.042655 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-host-proc-sys-kernel\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.043036 kubelet[2019]: I0514 00:38:22.042678 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cab8b26-d933-46c1-ad1f-02d67c3ea160-hubble-tls\") pod \"cilium-89bkz\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " pod="kube-system/cilium-89bkz" May 14 00:38:22.111055 sshd[3805]: pam_unix(sshd:session): session closed for user core May 14 00:38:22.114021 systemd[1]: Started sshd@24-10.0.0.47:22-10.0.0.1:36528.service. May 14 00:38:22.116867 systemd-logind[1203]: Session 24 logged out. Waiting for processes to exit. May 14 00:38:22.119531 systemd[1]: sshd@23-10.0.0.47:22-10.0.0.1:36526.service: Deactivated successfully. May 14 00:38:22.120014 kubelet[2019]: E0514 00:38:22.119974 2019 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-hm7z6 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-89bkz" podUID="4cab8b26-d933-46c1-ad1f-02d67c3ea160" May 14 00:38:22.120235 systemd[1]: session-24.scope: Deactivated successfully. May 14 00:38:22.125771 systemd-logind[1203]: Removed session 24. May 14 00:38:22.164016 sshd[3820]: Accepted publickey for core from 10.0.0.1 port 36528 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:38:22.164815 sshd[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:38:22.167936 systemd-logind[1203]: New session 25 of user core. May 14 00:38:22.168745 systemd[1]: Started session-25.scope. 
May 14 00:38:23.149392 kubelet[2019]: I0514 00:38:23.149344 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-config-path\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149392 kubelet[2019]: I0514 00:38:23.149385 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-lib-modules\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149392 kubelet[2019]: I0514 00:38:23.149404 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-host-proc-sys-kernel\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149793 kubelet[2019]: I0514 00:38:23.149419 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-bpf-maps\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149793 kubelet[2019]: I0514 00:38:23.149434 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-xtables-lock\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149793 kubelet[2019]: I0514 00:38:23.149451 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-cgroup\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149793 kubelet[2019]: I0514 00:38:23.149468 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-etc-cni-netd\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149793 kubelet[2019]: I0514 00:38:23.149486 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cab8b26-d933-46c1-ad1f-02d67c3ea160-clustermesh-secrets\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149793 kubelet[2019]: I0514 00:38:23.149504 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-ipsec-secrets\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149979 kubelet[2019]: I0514 00:38:23.149497 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.149979 kubelet[2019]: I0514 00:38:23.149520 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-run\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149979 kubelet[2019]: I0514 00:38:23.149538 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cab8b26-d933-46c1-ad1f-02d67c3ea160-hubble-tls\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149979 kubelet[2019]: I0514 00:38:23.149552 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cni-path\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149979 kubelet[2019]: I0514 00:38:23.149565 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-hostproc\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.149979 kubelet[2019]: I0514 00:38:23.149581 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-host-proc-sys-net\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.150116 kubelet[2019]: I0514 00:38:23.149597 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm7z6\" (UniqueName: \"kubernetes.io/projected/4cab8b26-d933-46c1-ad1f-02d67c3ea160-kube-api-access-hm7z6\") pod \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\" (UID: \"4cab8b26-d933-46c1-ad1f-02d67c3ea160\") " May 14 00:38:23.150116 kubelet[2019]: I0514 00:38:23.149625 2019 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.151266 kubelet[2019]: I0514 00:38:23.151221 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:38:23.151266 kubelet[2019]: I0514 00:38:23.151271 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-hostproc" (OuterVolumeSpecName: "hostproc") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.151372 kubelet[2019]: I0514 00:38:23.151287 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cni-path" (OuterVolumeSpecName: "cni-path") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.152773 kubelet[2019]: I0514 00:38:23.152735 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cab8b26-d933-46c1-ad1f-02d67c3ea160-kube-api-access-hm7z6" (OuterVolumeSpecName: "kube-api-access-hm7z6") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "kube-api-access-hm7z6". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:38:23.152867 kubelet[2019]: I0514 00:38:23.152780 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.152867 kubelet[2019]: I0514 00:38:23.152799 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.152867 kubelet[2019]: I0514 00:38:23.152814 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.152867 kubelet[2019]: I0514 00:38:23.152827 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.152867 kubelet[2019]: I0514 00:38:23.152840 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.153027 kubelet[2019]: I0514 00:38:23.152866 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.153027 kubelet[2019]: I0514 00:38:23.152901 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.153159 kubelet[2019]: I0514 00:38:23.153134 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cab8b26-d933-46c1-ad1f-02d67c3ea160-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:38:23.153406 systemd[1]: var-lib-kubelet-pods-4cab8b26\x2dd933\x2d46c1\x2dad1f\x2d02d67c3ea160-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhm7z6.mount: Deactivated successfully. May 14 00:38:23.153495 systemd[1]: var-lib-kubelet-pods-4cab8b26\x2dd933\x2d46c1\x2dad1f\x2d02d67c3ea160-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 00:38:23.153547 systemd[1]: var-lib-kubelet-pods-4cab8b26\x2dd933\x2d46c1\x2dad1f\x2d02d67c3ea160-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:38:23.154919 kubelet[2019]: I0514 00:38:23.154868 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cab8b26-d933-46c1-ad1f-02d67c3ea160-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:38:23.155635 kubelet[2019]: I0514 00:38:23.155591 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4cab8b26-d933-46c1-ad1f-02d67c3ea160" (UID: "4cab8b26-d933-46c1-ad1f-02d67c3ea160"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:38:23.157112 systemd[1]: var-lib-kubelet-pods-4cab8b26\x2dd933\x2d46c1\x2dad1f\x2d02d67c3ea160-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
May 14 00:38:23.250405 kubelet[2019]: I0514 00:38:23.250369 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250405 kubelet[2019]: I0514 00:38:23.250398 2019 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250405 kubelet[2019]: I0514 00:38:23.250407 2019 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250405 kubelet[2019]: I0514 00:38:23.250415 2019 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250608 kubelet[2019]: I0514 00:38:23.250423 2019 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250608 kubelet[2019]: I0514 00:38:23.250431 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250608 kubelet[2019]: I0514 00:38:23.250439 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250608 kubelet[2019]: I0514 00:38:23.250448 2019 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cab8b26-d933-46c1-ad1f-02d67c3ea160-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250608 kubelet[2019]: I0514 00:38:23.250455 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250608 kubelet[2019]: I0514 00:38:23.250462 2019 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250608 kubelet[2019]: I0514 00:38:23.250469 2019 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250608 kubelet[2019]: I0514 00:38:23.250476 2019 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cab8b26-d933-46c1-ad1f-02d67c3ea160-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250779 kubelet[2019]: I0514 00:38:23.250483 2019 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hm7z6\" (UniqueName: \"kubernetes.io/projected/4cab8b26-d933-46c1-ad1f-02d67c3ea160-kube-api-access-hm7z6\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.250779 
kubelet[2019]: I0514 00:38:23.250490 2019 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cab8b26-d933-46c1-ad1f-02d67c3ea160-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 00:38:23.825959 kubelet[2019]: E0514 00:38:23.825927 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:23.832519 systemd[1]: Removed slice kubepods-burstable-pod4cab8b26_d933_46c1_ad1f_02d67c3ea160.slice. May 14 00:38:23.879270 kubelet[2019]: E0514 00:38:23.879238 2019 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:38:24.026912 kubelet[2019]: I0514 00:38:24.026299 2019 topology_manager.go:215] "Topology Admit Handler" podUID="c400f63b-cdb3-4c10-ae35-3ccbbc6f6255" podNamespace="kube-system" podName="cilium-2stbq" May 14 00:38:24.033307 systemd[1]: Created slice kubepods-burstable-podc400f63b_cdb3_4c10_ae35_3ccbbc6f6255.slice. May 14 00:38:24.156095 kubelet[2019]: I0514 00:38:24.155978 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-host-proc-sys-kernel\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156095 kubelet[2019]: I0514 00:38:24.156018 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-cilium-ipsec-secrets\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156095 kubelet[2019]: I0514 00:38:24.156045 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-cni-path\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156095 kubelet[2019]: I0514 00:38:24.156062 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-etc-cni-netd\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156095 kubelet[2019]: I0514 00:38:24.156077 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-clustermesh-secrets\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156095 kubelet[2019]: I0514 00:38:24.156099 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-cilium-run\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156535 kubelet[2019]: I0514 00:38:24.156114 2019 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zkwd\" (UniqueName: \"kubernetes.io/projected/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-kube-api-access-4zkwd\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156535 kubelet[2019]: I0514 00:38:24.156130 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-hostproc\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156535 kubelet[2019]: I0514 00:38:24.156145 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-cilium-cgroup\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156535 kubelet[2019]: I0514 00:38:24.156158 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-bpf-maps\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156535 kubelet[2019]: I0514 00:38:24.156175 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-lib-modules\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156535 kubelet[2019]: I0514 00:38:24.156215 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-hubble-tls\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156667 kubelet[2019]: I0514 00:38:24.156267 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-host-proc-sys-net\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156667 kubelet[2019]: I0514 00:38:24.156294 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-cilium-config-path\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.156667 kubelet[2019]: I0514 00:38:24.156314 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c400f63b-cdb3-4c10-ae35-3ccbbc6f6255-xtables-lock\") pod \"cilium-2stbq\" (UID: \"c400f63b-cdb3-4c10-ae35-3ccbbc6f6255\") " pod="kube-system/cilium-2stbq" May 14 00:38:24.335366 kubelet[2019]: E0514 00:38:24.335320 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:24.336218 env[1216]: 
time="2025-05-14T00:38:24.335808835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2stbq,Uid:c400f63b-cdb3-4c10-ae35-3ccbbc6f6255,Namespace:kube-system,Attempt:0,}" May 14 00:38:24.346665 env[1216]: time="2025-05-14T00:38:24.346594755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:38:24.346665 env[1216]: time="2025-05-14T00:38:24.346649436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:38:24.346665 env[1216]: time="2025-05-14T00:38:24.346660476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:38:24.346822 env[1216]: time="2025-05-14T00:38:24.346785678Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76 pid=3851 runtime=io.containerd.runc.v2 May 14 00:38:24.356472 systemd[1]: Started cri-containerd-98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76.scope. May 14 00:38:24.389947 env[1216]: time="2025-05-14T00:38:24.389902158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2stbq,Uid:c400f63b-cdb3-4c10-ae35-3ccbbc6f6255,Namespace:kube-system,Attempt:0,} returns sandbox id \"98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76\"" May 14 00:38:24.390573 kubelet[2019]: E0514 00:38:24.390546 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:24.393599 env[1216]: time="2025-05-14T00:38:24.393561532Z" level=info msg="CreateContainer within sandbox \"98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:38:24.403584 env[1216]: time="2025-05-14T00:38:24.403535160Z" level=info msg="CreateContainer within sandbox \"98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65c37e654ac9ca41d7186fb0525997ff386ec3dafcd97e8ec88627d004b3c64f\"" May 14 00:38:24.403948 env[1216]: time="2025-05-14T00:38:24.403919686Z" level=info msg="StartContainer for \"65c37e654ac9ca41d7186fb0525997ff386ec3dafcd97e8ec88627d004b3c64f\"" May 14 00:38:24.419438 systemd[1]: Started cri-containerd-65c37e654ac9ca41d7186fb0525997ff386ec3dafcd97e8ec88627d004b3c64f.scope. May 14 00:38:24.450514 env[1216]: time="2025-05-14T00:38:24.450467137Z" level=info msg="StartContainer for \"65c37e654ac9ca41d7186fb0525997ff386ec3dafcd97e8ec88627d004b3c64f\" returns successfully" May 14 00:38:24.458185 systemd[1]: cri-containerd-65c37e654ac9ca41d7186fb0525997ff386ec3dafcd97e8ec88627d004b3c64f.scope: Deactivated successfully. 
May 14 00:38:24.483532 env[1216]: time="2025-05-14T00:38:24.483476267Z" level=info msg="shim disconnected" id=65c37e654ac9ca41d7186fb0525997ff386ec3dafcd97e8ec88627d004b3c64f May 14 00:38:24.483532 env[1216]: time="2025-05-14T00:38:24.483523028Z" level=warning msg="cleaning up after shim disconnected" id=65c37e654ac9ca41d7186fb0525997ff386ec3dafcd97e8ec88627d004b3c64f namespace=k8s.io May 14 00:38:24.483532 env[1216]: time="2025-05-14T00:38:24.483532788Z" level=info msg="cleaning up dead shim" May 14 00:38:24.490817 env[1216]: time="2025-05-14T00:38:24.490773495Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3937 runtime=io.containerd.runc.v2\n" May 14 00:38:24.999510 kubelet[2019]: E0514 00:38:24.999480 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:25.006123 env[1216]: time="2025-05-14T00:38:25.006072062Z" level=info msg="CreateContainer within sandbox \"98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:38:25.018957 env[1216]: time="2025-05-14T00:38:25.018907926Z" level=info msg="CreateContainer within sandbox \"98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cb81bc9f4c8e711c0fd60cb2336bcf55fd22e5a29a87bb1e9e6e55cc482d072d\"" May 14 00:38:25.019453 env[1216]: time="2025-05-14T00:38:25.019430493Z" level=info msg="StartContainer for \"cb81bc9f4c8e711c0fd60cb2336bcf55fd22e5a29a87bb1e9e6e55cc482d072d\"" May 14 00:38:25.032794 systemd[1]: Started cri-containerd-cb81bc9f4c8e711c0fd60cb2336bcf55fd22e5a29a87bb1e9e6e55cc482d072d.scope. May 14 00:38:25.066220 env[1216]: time="2025-05-14T00:38:25.066171323Z" level=info msg="StartContainer for \"cb81bc9f4c8e711c0fd60cb2336bcf55fd22e5a29a87bb1e9e6e55cc482d072d\" returns successfully" May 14 00:38:25.069936 systemd[1]: cri-containerd-cb81bc9f4c8e711c0fd60cb2336bcf55fd22e5a29a87bb1e9e6e55cc482d072d.scope: Deactivated successfully. 
May 14 00:38:25.088127 env[1216]: time="2025-05-14T00:38:25.088082117Z" level=info msg="shim disconnected" id=cb81bc9f4c8e711c0fd60cb2336bcf55fd22e5a29a87bb1e9e6e55cc482d072d May 14 00:38:25.088344 env[1216]: time="2025-05-14T00:38:25.088325161Z" level=warning msg="cleaning up after shim disconnected" id=cb81bc9f4c8e711c0fd60cb2336bcf55fd22e5a29a87bb1e9e6e55cc482d072d namespace=k8s.io May 14 00:38:25.088420 env[1216]: time="2025-05-14T00:38:25.088407282Z" level=info msg="cleaning up dead shim" May 14 00:38:25.095251 env[1216]: time="2025-05-14T00:38:25.095212940Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3999 runtime=io.containerd.runc.v2\n" May 14 00:38:25.313416 kubelet[2019]: I0514 00:38:25.313365 2019 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T00:38:25Z","lastTransitionTime":"2025-05-14T00:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 14 00:38:25.828186 kubelet[2019]: I0514 00:38:25.828146 2019 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cab8b26-d933-46c1-ad1f-02d67c3ea160" path="/var/lib/kubelet/pods/4cab8b26-d933-46c1-ad1f-02d67c3ea160/volumes" May 14 00:38:26.002784 kubelet[2019]: E0514 00:38:26.002758 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:26.004869 env[1216]: time="2025-05-14T00:38:26.004830375Z" level=info msg="CreateContainer within sandbox \"98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:38:26.015703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391536386.mount: Deactivated successfully. May 14 00:38:26.018090 env[1216]: time="2025-05-14T00:38:26.018056358Z" level=info msg="CreateContainer within sandbox \"98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4b65b850cd6ec2990c31174717b4c8ef6fd7cbebc7d98c505fca45c776e4e3d2\"" May 14 00:38:26.018685 env[1216]: time="2025-05-14T00:38:26.018660486Z" level=info msg="StartContainer for \"4b65b850cd6ec2990c31174717b4c8ef6fd7cbebc7d98c505fca45c776e4e3d2\"" May 14 00:38:26.036815 systemd[1]: Started cri-containerd-4b65b850cd6ec2990c31174717b4c8ef6fd7cbebc7d98c505fca45c776e4e3d2.scope. May 14 00:38:26.064305 env[1216]: time="2025-05-14T00:38:26.064262917Z" level=info msg="StartContainer for \"4b65b850cd6ec2990c31174717b4c8ef6fd7cbebc7d98c505fca45c776e4e3d2\" returns successfully" May 14 00:38:26.067511 systemd[1]: cri-containerd-4b65b850cd6ec2990c31174717b4c8ef6fd7cbebc7d98c505fca45c776e4e3d2.scope: Deactivated successfully. 
May 14 00:38:26.088260 env[1216]: time="2025-05-14T00:38:26.088168848Z" level=info msg="shim disconnected" id=4b65b850cd6ec2990c31174717b4c8ef6fd7cbebc7d98c505fca45c776e4e3d2 May 14 00:38:26.088626 env[1216]: time="2025-05-14T00:38:26.088602654Z" level=warning msg="cleaning up after shim disconnected" id=4b65b850cd6ec2990c31174717b4c8ef6fd7cbebc7d98c505fca45c776e4e3d2 namespace=k8s.io May 14 00:38:26.088713 env[1216]: time="2025-05-14T00:38:26.088698375Z" level=info msg="cleaning up dead shim" May 14 00:38:26.094682 env[1216]: time="2025-05-14T00:38:26.094653298Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4056 runtime=io.containerd.runc.v2\n" May 14 00:38:26.261508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b65b850cd6ec2990c31174717b4c8ef6fd7cbebc7d98c505fca45c776e4e3d2-rootfs.mount: Deactivated successfully. May 14 00:38:27.005817 kubelet[2019]: E0514 00:38:27.005788 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:27.008039 env[1216]: time="2025-05-14T00:38:27.007992652Z" level=info msg="CreateContainer within sandbox \"98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:38:27.018930 env[1216]: time="2025-05-14T00:38:27.018861237Z" level=info msg="CreateContainer within sandbox \"98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"366768b02aaa0adacf486ac31dc8748dcde208bde56d1b60d7e05769499ee216\"" May 14 00:38:27.019341 env[1216]: time="2025-05-14T00:38:27.019314163Z" level=info msg="StartContainer for \"366768b02aaa0adacf486ac31dc8748dcde208bde56d1b60d7e05769499ee216\"" May 14 00:38:27.042116 systemd[1]: Started cri-containerd-366768b02aaa0adacf486ac31dc8748dcde208bde56d1b60d7e05769499ee216.scope. May 14 00:38:27.070165 env[1216]: time="2025-05-14T00:38:27.070124241Z" level=info msg="StartContainer for \"366768b02aaa0adacf486ac31dc8748dcde208bde56d1b60d7e05769499ee216\" returns successfully" May 14 00:38:27.072014 systemd[1]: cri-containerd-366768b02aaa0adacf486ac31dc8748dcde208bde56d1b60d7e05769499ee216.scope: Deactivated successfully. May 14 00:38:27.089783 env[1216]: time="2025-05-14T00:38:27.089738663Z" level=info msg="shim disconnected" id=366768b02aaa0adacf486ac31dc8748dcde208bde56d1b60d7e05769499ee216 May 14 00:38:27.090055 env[1216]: time="2025-05-14T00:38:27.090033867Z" level=warning msg="cleaning up after shim disconnected" id=366768b02aaa0adacf486ac31dc8748dcde208bde56d1b60d7e05769499ee216 namespace=k8s.io May 14 00:38:27.090136 env[1216]: time="2025-05-14T00:38:27.090121148Z" level=info msg="cleaning up dead shim" May 14 00:38:27.097352 env[1216]: time="2025-05-14T00:38:27.097318645Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4112 runtime=io.containerd.runc.v2\n" May 14 00:38:27.261520 systemd[1]: run-containerd-runc-k8s.io-366768b02aaa0adacf486ac31dc8748dcde208bde56d1b60d7e05769499ee216-runc.oBa7Fb.mount: Deactivated successfully. May 14 00:38:27.261627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-366768b02aaa0adacf486ac31dc8748dcde208bde56d1b60d7e05769499ee216-rootfs.mount: Deactivated successfully. 
May 14 00:38:28.010619 kubelet[2019]: E0514 00:38:28.010573 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:28.014207 env[1216]: time="2025-05-14T00:38:28.014169043Z" level=info msg="CreateContainer within sandbox \"98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:38:28.027082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852734296.mount: Deactivated successfully. May 14 00:38:28.036296 env[1216]: time="2025-05-14T00:38:28.036246487Z" level=info msg="CreateContainer within sandbox \"98fd693ef05b32d55e332c8182eecbd4303b921137057c8e43e78cd117d2ad76\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5fdadfea1f5be1446f94061032ae4332e729481b81053341c6ece89e5aa6816f\"" May 14 00:38:28.037943 env[1216]: time="2025-05-14T00:38:28.037910229Z" level=info msg="StartContainer for \"5fdadfea1f5be1446f94061032ae4332e729481b81053341c6ece89e5aa6816f\"" May 14 00:38:28.053867 systemd[1]: Started cri-containerd-5fdadfea1f5be1446f94061032ae4332e729481b81053341c6ece89e5aa6816f.scope. May 14 00:38:28.092772 env[1216]: time="2025-05-14T00:38:28.092724135Z" level=info msg="StartContainer for \"5fdadfea1f5be1446f94061032ae4332e729481b81053341c6ece89e5aa6816f\" returns successfully" May 14 00:38:28.384974 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 14 00:38:29.015702 kubelet[2019]: E0514 00:38:29.015410 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:29.029375 kubelet[2019]: I0514 00:38:29.029309 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2stbq" podStartSLOduration=5.029292594 podStartE2EDuration="5.029292594s" podCreationTimestamp="2025-05-14 00:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:38:29.028526385 +0000 UTC m=+85.280207472" watchObservedRunningTime="2025-05-14 00:38:29.029292594 +0000 UTC m=+85.280973681" May 14 00:38:30.336975 kubelet[2019]: E0514 00:38:30.336943 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:30.423248 systemd[1]: run-containerd-runc-k8s.io-5fdadfea1f5be1446f94061032ae4332e729481b81053341c6ece89e5aa6816f-runc.aXXZMB.mount: Deactivated successfully. 
May 14 00:38:31.201934 systemd-networkd[1055]: lxc_health: Link UP May 14 00:38:31.217005 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 14 00:38:31.216931 systemd-networkd[1055]: lxc_health: Gained carrier May 14 00:38:32.337432 kubelet[2019]: E0514 00:38:32.337402 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:32.493175 systemd-networkd[1055]: lxc_health: Gained IPv6LL May 14 00:38:32.826550 kubelet[2019]: E0514 00:38:32.826508 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:33.021740 kubelet[2019]: E0514 00:38:33.021714 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:34.825917 kubelet[2019]: E0514 00:38:34.825852 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:36.861200 systemd[1]: run-containerd-runc-k8s.io-5fdadfea1f5be1446f94061032ae4332e729481b81053341c6ece89e5aa6816f-runc.cct6HG.mount: Deactivated successfully. May 14 00:38:36.916803 sshd[3820]: pam_unix(sshd:session): session closed for user core May 14 00:38:36.919602 systemd[1]: sshd@24-10.0.0.47:22-10.0.0.1:36528.service: Deactivated successfully. May 14 00:38:36.920318 systemd[1]: session-25.scope: Deactivated successfully. May 14 00:38:36.920780 systemd-logind[1203]: Session 25 logged out. Waiting for processes to exit. May 14 00:38:36.921731 systemd-logind[1203]: Removed session 25.