May 17 09:59:36.797280 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 17 09:59:36.797302 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sat May 17 08:42:01 -00 2025 May 17 09:59:36.797312 kernel: KASLR enabled May 17 09:59:36.797317 kernel: efi: EFI v2.7 by EDK II May 17 09:59:36.797323 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 May 17 09:59:36.797328 kernel: random: crng init done May 17 09:59:36.797335 kernel: secureboot: Secure boot disabled May 17 09:59:36.797340 kernel: ACPI: Early table checksum verification disabled May 17 09:59:36.797346 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) May 17 09:59:36.797353 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 17 09:59:36.797359 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 17 09:59:36.797364 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 09:59:36.797370 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 17 09:59:36.797376 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 09:59:36.797383 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 09:59:36.797390 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 09:59:36.797396 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 17 09:59:36.797402 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 17 09:59:36.797408 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 17 09:59:36.797414 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 17 09:59:36.797419 kernel: ACPI: Use ACPI SPCR as default console: Yes May 17 09:59:36.797425 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 17 09:59:36.797431 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff] May 17 09:59:36.797437 kernel: Zone ranges: May 17 09:59:36.797443 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 17 09:59:36.797450 kernel: DMA32 empty May 17 09:59:36.797456 kernel: Normal empty May 17 09:59:36.797462 kernel: Device empty May 17 09:59:36.797468 kernel: Movable zone start for each node May 17 09:59:36.797474 kernel: Early memory node ranges May 17 09:59:36.797480 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] May 17 09:59:36.797486 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] May 17 09:59:36.797508 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] May 17 09:59:36.797515 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] May 17 09:59:36.797521 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] May 17 09:59:36.797527 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] May 17 09:59:36.797533 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] May 17 09:59:36.797541 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] May 17 09:59:36.797547 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] May 17 09:59:36.797553 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 17 09:59:36.797562 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 
17 09:59:36.797568 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 17 09:59:36.797575 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 17 09:59:36.797583 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 17 09:59:36.797589 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 17 09:59:36.797595 kernel: psci: probing for conduit method from ACPI. May 17 09:59:36.797602 kernel: psci: PSCIv1.1 detected in firmware. May 17 09:59:36.797608 kernel: psci: Using standard PSCI v0.2 function IDs May 17 09:59:36.797614 kernel: psci: Trusted OS migration not required May 17 09:59:36.797621 kernel: psci: SMC Calling Convention v1.1 May 17 09:59:36.797627 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 17 09:59:36.797633 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168 May 17 09:59:36.797640 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096 May 17 09:59:36.797648 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 17 09:59:36.797654 kernel: Detected PIPT I-cache on CPU0 May 17 09:59:36.797660 kernel: CPU features: detected: GIC system register CPU interface May 17 09:59:36.797667 kernel: CPU features: detected: Spectre-v4 May 17 09:59:36.797673 kernel: CPU features: detected: Spectre-BHB May 17 09:59:36.797679 kernel: CPU features: kernel page table isolation forced ON by KASLR May 17 09:59:36.797686 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 17 09:59:36.797692 kernel: CPU features: detected: ARM erratum 1418040 May 17 09:59:36.797698 kernel: CPU features: detected: SSBS not fully self-synchronizing May 17 09:59:36.797705 kernel: alternatives: applying boot alternatives May 17 09:59:36.797712 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a72c061e2aa335746dc4ceac58c43e3318237560b467544993aaee87c0602b03 May 17 09:59:36.797724 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 09:59:36.797732 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 09:59:36.797738 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 09:59:36.797744 kernel: Fallback order for Node 0: 0 May 17 09:59:36.797751 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 May 17 09:59:36.797757 kernel: Policy zone: DMA May 17 09:59:36.797763 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 09:59:36.797770 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB May 17 09:59:36.797776 kernel: software IO TLB: area num 4. May 17 09:59:36.797782 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB May 17 09:59:36.797789 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB) May 17 09:59:36.797795 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 17 09:59:36.797803 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 09:59:36.797810 kernel: rcu: RCU event tracing is enabled. May 17 09:59:36.797817 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 17 09:59:36.797823 kernel: Trampoline variant of Tasks RCU enabled. 
May 17 09:59:36.797830 kernel: Tracing variant of Tasks RCU enabled. May 17 09:59:36.797836 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 09:59:36.797843 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 17 09:59:36.797849 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 17 09:59:36.797856 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 17 09:59:36.797862 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 17 09:59:36.797869 kernel: GICv3: 256 SPIs implemented May 17 09:59:36.797876 kernel: GICv3: 0 Extended SPIs implemented May 17 09:59:36.797883 kernel: Root IRQ handler: gic_handle_irq May 17 09:59:36.797889 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 17 09:59:36.797895 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 May 17 09:59:36.797902 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 17 09:59:36.797908 kernel: ITS [mem 0x08080000-0x0809ffff] May 17 09:59:36.797915 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1) May 17 09:59:36.797921 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1) May 17 09:59:36.797928 kernel: GICv3: using LPI property table @0x0000000040100000 May 17 09:59:36.797934 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000 May 17 09:59:36.797941 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 09:59:36.797947 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 09:59:36.797954 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 17 09:59:36.797961 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 17 09:59:36.797968 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 17 09:59:36.797974 kernel: arm-pv: using stolen time PV May 17 09:59:36.797981 kernel: Console: colour dummy device 80x25 May 17 09:59:36.797987 kernel: ACPI: Core revision 20240827 May 17 09:59:36.797994 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 17 09:59:36.798001 kernel: pid_max: default: 32768 minimum: 301 May 17 09:59:36.798007 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 17 09:59:36.798015 kernel: landlock: Up and running. May 17 09:59:36.798022 kernel: SELinux: Initializing. May 17 09:59:36.798028 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 09:59:36.798035 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 09:59:36.798041 kernel: rcu: Hierarchical SRCU implementation. May 17 09:59:36.798048 kernel: rcu: Max phase no-delay instances is 400. May 17 09:59:36.798055 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 17 09:59:36.798061 kernel: Remapping and enabling EFI services. May 17 09:59:36.798068 kernel: smp: Bringing up secondary CPUs ... 
May 17 09:59:36.798074 kernel: Detected PIPT I-cache on CPU1 May 17 09:59:36.798086 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 17 09:59:36.798093 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000 May 17 09:59:36.798101 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 09:59:36.798108 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 17 09:59:36.798115 kernel: Detected PIPT I-cache on CPU2 May 17 09:59:36.798122 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 17 09:59:36.798129 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000 May 17 09:59:36.798137 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 09:59:36.798144 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 17 09:59:36.798151 kernel: Detected PIPT I-cache on CPU3 May 17 09:59:36.798157 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 17 09:59:36.798164 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000 May 17 09:59:36.798171 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 09:59:36.798178 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 17 09:59:36.798185 kernel: smp: Brought up 1 node, 4 CPUs May 17 09:59:36.798191 kernel: SMP: Total of 4 processors activated. May 17 09:59:36.798198 kernel: CPU: All CPU(s) started at EL1 May 17 09:59:36.798206 kernel: CPU features: detected: 32-bit EL0 Support May 17 09:59:36.798226 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 17 09:59:36.798233 kernel: CPU features: detected: Common not Private translations May 17 09:59:36.798240 kernel: CPU features: detected: CRC32 instructions May 17 09:59:36.798246 kernel: CPU features: detected: Enhanced Virtualization Traps May 17 09:59:36.798253 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 17 09:59:36.798260 kernel: CPU features: detected: LSE atomic instructions May 17 09:59:36.798272 kernel: CPU features: detected: Privileged Access Never May 17 09:59:36.798279 kernel: CPU features: detected: RAS Extension Support May 17 09:59:36.798288 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 17 09:59:36.798295 kernel: alternatives: applying system-wide alternatives May 17 09:59:36.798302 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 May 17 09:59:36.798310 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved) May 17 09:59:36.798317 kernel: devtmpfs: initialized May 17 09:59:36.798324 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 09:59:36.798332 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 17 09:59:36.798338 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 17 09:59:36.798345 kernel: 0 pages in range for non-PLT usage May 17 09:59:36.798353 kernel: 508544 pages in range for PLT usage May 17 09:59:36.798360 kernel: pinctrl core: initialized pinctrl subsystem May 17 09:59:36.798367 kernel: SMBIOS 3.0.0 present. 
May 17 09:59:36.798374 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 17 09:59:36.798380 kernel: DMI: Memory slots populated: 1/1 May 17 09:59:36.798387 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 09:59:36.798394 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 17 09:59:36.798401 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 09:59:36.798408 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 09:59:36.798416 kernel: audit: initializing netlink subsys (disabled) May 17 09:59:36.798423 kernel: audit: type=2000 audit(0.029:1): state=initialized audit_enabled=0 res=1 May 17 09:59:36.798430 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 09:59:36.798437 kernel: cpuidle: using governor menu May 17 09:59:36.798444 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 17 09:59:36.798451 kernel: ASID allocator initialised with 32768 entries May 17 09:59:36.798458 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 09:59:36.798465 kernel: Serial: AMBA PL011 UART driver May 17 09:59:36.798471 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 09:59:36.798480 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 17 09:59:36.798566 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 17 09:59:36.798577 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 17 09:59:36.798585 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 09:59:36.798591 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 17 09:59:36.798598 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 17 09:59:36.798605 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 17 09:59:36.798612 kernel: ACPI: Added _OSI(Module Device) May 17 09:59:36.798619 kernel: ACPI: Added _OSI(Processor Device) May 17 09:59:36.798629 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 09:59:36.798635 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 09:59:36.798643 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 09:59:36.798649 kernel: ACPI: Interpreter enabled May 17 09:59:36.798656 kernel: ACPI: Using GIC for interrupt routing May 17 09:59:36.798663 kernel: ACPI: MCFG table detected, 1 entries May 17 09:59:36.798670 kernel: ACPI: CPU0 has been hot-added May 17 09:59:36.798677 kernel: ACPI: CPU1 has been hot-added May 17 09:59:36.798684 kernel: ACPI: CPU2 has been hot-added May 17 09:59:36.798691 kernel: ACPI: CPU3 has been hot-added May 17 09:59:36.798699 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 17 09:59:36.798706 kernel: printk: legacy console [ttyAMA0] enabled May 17 09:59:36.798713 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 09:59:36.798838 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 09:59:36.798905 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 17 09:59:36.798966 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 17 09:59:36.799024 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 17 09:59:36.799083 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 17 09:59:36.799092 kernel: ACPI: Remapped I/O 
0x000000003eff0000 to [io 0x0000-0xffff window] May 17 09:59:36.799099 kernel: PCI host bridge to bus 0000:00 May 17 09:59:36.799182 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 17 09:59:36.799242 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 17 09:59:36.799307 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 17 09:59:36.799361 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 09:59:36.799436 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint May 17 09:59:36.799560 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 17 09:59:36.799626 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] May 17 09:59:36.799689 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] May 17 09:59:36.799747 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] May 17 09:59:36.799805 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned May 17 09:59:36.799862 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned May 17 09:59:36.799923 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned May 17 09:59:36.799977 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 17 09:59:36.800028 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 17 09:59:36.800079 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 17 09:59:36.800088 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 17 09:59:36.800095 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 17 09:59:36.800102 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 17 09:59:36.800110 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 17 09:59:36.800117 kernel: iommu: Default domain type: Translated May 17 09:59:36.800125 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 09:59:36.800131 kernel: efivars: Registered efivars operations May 17 09:59:36.800138 kernel: vgaarb: loaded May 17 09:59:36.800145 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 09:59:36.800152 kernel: VFS: Disk quotas dquot_6.6.0 May 17 09:59:36.800159 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 09:59:36.800166 kernel: pnp: PnP ACPI init May 17 09:59:36.800234 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 17 09:59:36.800245 kernel: pnp: PnP ACPI: found 1 devices May 17 09:59:36.800251 kernel: NET: Registered PF_INET protocol family May 17 09:59:36.800258 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 09:59:36.800270 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 09:59:36.800279 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 09:59:36.800286 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 09:59:36.800293 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 17 09:59:36.800302 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 09:59:36.800309 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 09:59:36.800316 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 09:59:36.800323 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 
09:59:36.800330 kernel: PCI: CLS 0 bytes, default 64 May 17 09:59:36.800337 kernel: kvm [1]: HYP mode not available May 17 09:59:36.800344 kernel: Initialise system trusted keyrings May 17 09:59:36.800351 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 09:59:36.800357 kernel: Key type asymmetric registered May 17 09:59:36.800365 kernel: Asymmetric key parser 'x509' registered May 17 09:59:36.800372 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 09:59:36.800379 kernel: io scheduler mq-deadline registered May 17 09:59:36.800386 kernel: io scheduler kyber registered May 17 09:59:36.800393 kernel: io scheduler bfq registered May 17 09:59:36.800400 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 09:59:36.800407 kernel: ACPI: button: Power Button [PWRB] May 17 09:59:36.800414 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 17 09:59:36.800479 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 17 09:59:36.800501 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 09:59:36.800508 kernel: thunder_xcv, ver 1.0 May 17 09:59:36.800515 kernel: thunder_bgx, ver 1.0 May 17 09:59:36.800522 kernel: nicpf, ver 1.0 May 17 09:59:36.800529 kernel: nicvf, ver 1.0 May 17 09:59:36.800609 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 09:59:36.800667 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T09:59:36 UTC (1747475976) May 17 09:59:36.800676 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 09:59:36.800685 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available May 17 09:59:36.800692 kernel: watchdog: NMI not fully supported May 17 09:59:36.800699 kernel: watchdog: Hard watchdog permanently disabled May 17 09:59:36.800706 kernel: NET: Registered PF_INET6 protocol family May 17 09:59:36.800713 kernel: Segment Routing with IPv6 May 17 09:59:36.800720 kernel: In-situ OAM (IOAM) with IPv6 May 17 09:59:36.800727 kernel: NET: Registered PF_PACKET protocol family May 17 09:59:36.800733 kernel: Key type dns_resolver registered May 17 09:59:36.800740 kernel: registered taskstats version 1 May 17 09:59:36.800747 kernel: Loading compiled-in X.509 certificates May 17 09:59:36.800755 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 4af015013aa6db6a56e62bf75200b0b56da7d2c1' May 17 09:59:36.800762 kernel: Demotion targets for Node 0: null May 17 09:59:36.800769 kernel: Key type .fscrypt registered May 17 09:59:36.800775 kernel: Key type fscrypt-provisioning registered May 17 09:59:36.800782 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 09:59:36.800789 kernel: ima: Allocated hash algorithm: sha1 May 17 09:59:36.800796 kernel: ima: No architecture policies found May 17 09:59:36.800803 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 09:59:36.800811 kernel: clk: Disabling unused clocks May 17 09:59:36.800818 kernel: PM: genpd: Disabling unused power domains May 17 09:59:36.800825 kernel: Warning: unable to open an initial console. 
May 17 09:59:36.800832 kernel: Freeing unused kernel memory: 39424K May 17 09:59:36.800838 kernel: Run /init as init process May 17 09:59:36.800845 kernel: with arguments: May 17 09:59:36.800852 kernel: /init May 17 09:59:36.800859 kernel: with environment: May 17 09:59:36.800865 kernel: HOME=/ May 17 09:59:36.800873 kernel: TERM=linux May 17 09:59:36.800880 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 09:59:36.800888 systemd[1]: Successfully made /usr/ read-only. May 17 09:59:36.800897 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 17 09:59:36.800905 systemd[1]: Detected virtualization kvm. May 17 09:59:36.800912 systemd[1]: Detected architecture arm64. May 17 09:59:36.800919 systemd[1]: Running in initrd. May 17 09:59:36.800926 systemd[1]: No hostname configured, using default hostname. May 17 09:59:36.800935 systemd[1]: Hostname set to . May 17 09:59:36.800942 systemd[1]: Initializing machine ID from VM UUID. May 17 09:59:36.800949 systemd[1]: Queued start job for default target initrd.target. May 17 09:59:36.800957 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 09:59:36.800964 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 09:59:36.800972 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 09:59:36.800980 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 09:59:36.800987 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 09:59:36.800996 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 09:59:36.801005 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 09:59:36.801012 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 09:59:36.801020 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 09:59:36.801027 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 09:59:36.801034 systemd[1]: Reached target paths.target - Path Units. May 17 09:59:36.801043 systemd[1]: Reached target slices.target - Slice Units. May 17 09:59:36.801050 systemd[1]: Reached target swap.target - Swaps. May 17 09:59:36.801058 systemd[1]: Reached target timers.target - Timer Units. May 17 09:59:36.801065 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 09:59:36.801073 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 09:59:36.801080 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 09:59:36.801087 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 17 09:59:36.801095 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 09:59:36.801102 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 09:59:36.801111 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 17 09:59:36.801118 systemd[1]: Reached target sockets.target - Socket Units. May 17 09:59:36.801125 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 09:59:36.801133 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 09:59:36.801140 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 09:59:36.801148 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 17 09:59:36.801156 systemd[1]: Starting systemd-fsck-usr.service... May 17 09:59:36.801163 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 09:59:36.801171 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 09:59:36.801178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 09:59:36.801186 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 09:59:36.801193 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 09:59:36.801201 systemd[1]: Finished systemd-fsck-usr.service. May 17 09:59:36.801210 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 09:59:36.801231 systemd-journald[243]: Collecting audit messages is disabled. May 17 09:59:36.801250 systemd-journald[243]: Journal started May 17 09:59:36.801280 systemd-journald[243]: Runtime Journal (/run/log/journal/26bcbc84c6814c7b86140654e08854dd) is 6M, max 48.5M, 42.4M free. May 17 09:59:36.792086 systemd-modules-load[245]: Inserted module 'overlay' May 17 09:59:36.806169 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 09:59:36.808148 systemd[1]: Started systemd-journald.service - Journal Service. May 17 09:59:36.808169 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 09:59:36.812534 kernel: Bridge firewalling registered May 17 09:59:36.811870 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 09:59:36.811872 systemd-modules-load[245]: Inserted module 'br_netfilter' May 17 09:59:36.813621 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 09:59:36.815249 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 09:59:36.819601 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 09:59:36.822182 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 09:59:36.823694 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 09:59:36.824978 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 17 09:59:36.828876 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 09:59:36.837035 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 09:59:36.840416 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 09:59:36.841818 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 09:59:36.844441 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 17 09:59:36.846832 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 09:59:36.873997 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a72c061e2aa335746dc4ceac58c43e3318237560b467544993aaee87c0602b03 May 17 09:59:36.889688 systemd-resolved[288]: Positive Trust Anchors: May 17 09:59:36.889705 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 09:59:36.889735 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 09:59:36.894452 systemd-resolved[288]: Defaulting to hostname 'linux'. May 17 09:59:36.895383 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 09:59:36.899157 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 09:59:36.947524 kernel: SCSI subsystem initialized May 17 09:59:36.953519 kernel: Loading iSCSI transport class v2.0-870. May 17 09:59:36.961530 kernel: iscsi: registered transport (tcp) May 17 09:59:36.973510 kernel: iscsi: registered transport (qla4xxx) May 17 09:59:36.973533 kernel: QLogic iSCSI HBA Driver May 17 09:59:36.991919 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 09:59:37.005525 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 09:59:37.007589 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 09:59:37.051186 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 09:59:37.054214 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 09:59:37.118516 kernel: raid6: neonx8 gen() 15739 MB/s May 17 09:59:37.135509 kernel: raid6: neonx4 gen() 15791 MB/s May 17 09:59:37.152519 kernel: raid6: neonx2 gen() 13242 MB/s May 17 09:59:37.169506 kernel: raid6: neonx1 gen() 10450 MB/s May 17 09:59:37.186514 kernel: raid6: int64x8 gen() 6889 MB/s May 17 09:59:37.203512 kernel: raid6: int64x4 gen() 7350 MB/s May 17 09:59:37.220506 kernel: raid6: int64x2 gen() 6096 MB/s May 17 09:59:37.237512 kernel: raid6: int64x1 gen() 5040 MB/s May 17 09:59:37.237536 kernel: raid6: using algorithm neonx4 gen() 15791 MB/s May 17 09:59:37.254513 kernel: raid6: .... 
xor() 12290 MB/s, rmw enabled May 17 09:59:37.254529 kernel: raid6: using neon recovery algorithm May 17 09:59:37.259509 kernel: xor: measuring software checksum speed May 17 09:59:37.259527 kernel: 8regs : 21653 MB/sec May 17 09:59:37.260886 kernel: 32regs : 19580 MB/sec May 17 09:59:37.260911 kernel: arm64_neon : 28022 MB/sec May 17 09:59:37.260928 kernel: xor: using function: arm64_neon (28022 MB/sec) May 17 09:59:37.315516 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 09:59:37.321726 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 09:59:37.324185 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 09:59:37.352388 systemd-udevd[499]: Using default interface naming scheme 'v255'. May 17 09:59:37.356393 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 09:59:37.358652 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 09:59:37.381955 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation May 17 09:59:37.403442 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 09:59:37.405772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 09:59:37.458115 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 09:59:37.461238 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 09:59:37.501724 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 17 09:59:37.506668 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 17 09:59:37.506764 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 09:59:37.506775 kernel: GPT:9289727 != 19775487 May 17 09:59:37.506783 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 09:59:37.506792 kernel: GPT:9289727 != 19775487 May 17 09:59:37.506800 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 09:59:37.506808 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 09:59:37.509236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 09:59:37.509365 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 09:59:37.512718 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 09:59:37.514726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 09:59:37.542522 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 17 09:59:37.544884 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 09:59:37.551889 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 09:59:37.559437 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 17 09:59:37.567514 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 17 09:59:37.577994 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 17 09:59:37.579226 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 17 09:59:37.581554 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 09:59:37.584544 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 17 09:59:37.586630 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 09:59:37.589300 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 09:59:37.591140 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 09:59:37.604099 disk-uuid[594]: Primary Header is updated. May 17 09:59:37.604099 disk-uuid[594]: Secondary Entries is updated. May 17 09:59:37.604099 disk-uuid[594]: Secondary Header is updated. May 17 09:59:37.609523 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 09:59:37.610892 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 09:59:38.617549 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 09:59:38.620172 disk-uuid[599]: The operation has completed successfully. May 17 09:59:38.639538 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 09:59:38.639638 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 09:59:38.670178 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 09:59:38.695523 sh[614]: Success May 17 09:59:38.710613 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 09:59:38.712063 kernel: device-mapper: uevent: version 1.0.3 May 17 09:59:38.712089 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 17 09:59:38.723503 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 17 09:59:38.750740 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 09:59:38.752535 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 09:59:38.760701 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 09:59:38.766894 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 17 09:59:38.766922 kernel: BTRFS: device fsid 0ae13f0b-fc5b-4c2a-bb28-06d635b10baa devid 1 transid 42 /dev/mapper/usr (253:0) scanned by mount (626) May 17 09:59:38.767992 kernel: BTRFS info (device dm-0): first mount of filesystem 0ae13f0b-fc5b-4c2a-bb28-06d635b10baa May 17 09:59:38.768018 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 17 09:59:38.769503 kernel: BTRFS info (device dm-0): using free-space-tree May 17 09:59:38.772805 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 09:59:38.773892 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 17 09:59:38.775483 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 09:59:38.776266 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 09:59:38.777922 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 17 09:59:38.802540 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (658) May 17 09:59:38.805002 kernel: BTRFS info (device vda6): first mount of filesystem f89a5612-7786-479a-a46f-af205a06b6f7 May 17 09:59:38.805037 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 17 09:59:38.805048 kernel: BTRFS info (device vda6): using free-space-tree May 17 09:59:38.811534 kernel: BTRFS info (device vda6): last unmount of filesystem f89a5612-7786-479a-a46f-af205a06b6f7 May 17 09:59:38.812984 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 09:59:38.815980 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 09:59:38.892856 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 09:59:38.895880 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 09:59:38.937387 systemd-networkd[799]: lo: Link UP May 17 09:59:38.938227 systemd-networkd[799]: lo: Gained carrier May 17 09:59:38.939008 systemd-networkd[799]: Enumeration completed May 17 09:59:38.939132 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 09:59:38.940278 systemd[1]: Reached target network.target - Network. May 17 09:59:38.940531 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 09:59:38.940534 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 09:59:38.941079 systemd-networkd[799]: eth0: Link UP May 17 09:59:38.941081 systemd-networkd[799]: eth0: Gained carrier May 17 09:59:38.941089 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 09:59:38.959218 ignition[706]: Ignition 2.21.0 May 17 09:59:38.959231 ignition[706]: Stage: fetch-offline May 17 09:59:38.959271 ignition[706]: no configs at "/usr/lib/ignition/base.d" May 17 09:59:38.959279 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 09:59:38.959464 ignition[706]: parsed url from cmdline: "" May 17 09:59:38.959467 ignition[706]: no config URL provided May 17 09:59:38.959504 ignition[706]: reading system config file "/usr/lib/ignition/user.ign" May 17 09:59:38.962574 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 09:59:38.959512 ignition[706]: no config at "/usr/lib/ignition/user.ign" May 17 09:59:38.959533 ignition[706]: op(1): [started] loading QEMU firmware config module May 17 09:59:38.959537 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg" May 17 09:59:38.968629 ignition[706]: op(1): [finished] loading QEMU firmware config module May 17 09:59:39.005041 ignition[706]: parsing config with SHA512: 3bd7ee0a0282fd8f7a92e66c26f55e553bd0231c963bfbe93a81f6260c8a10be972ec25f3d37fc608bee42990655c88c4c1617bf7703d283e8f9b73d74a3d628 May 17 09:59:39.009067 unknown[706]: fetched base config from "system" May 17 09:59:39.009081 unknown[706]: fetched user config from "qemu" May 17 09:59:39.009484 ignition[706]: fetch-offline: fetch-offline passed May 17 09:59:39.009556 ignition[706]: Ignition finished successfully May 17 09:59:39.011363 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 17 09:59:39.012977 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 09:59:39.013830 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 09:59:39.046024 ignition[813]: Ignition 2.21.0 May 17 09:59:39.046042 ignition[813]: Stage: kargs May 17 09:59:39.046251 ignition[813]: no configs at "/usr/lib/ignition/base.d" May 17 09:59:39.046272 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 09:59:39.047546 ignition[813]: kargs: kargs passed May 17 09:59:39.047605 ignition[813]: Ignition finished successfully May 17 09:59:39.051533 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 09:59:39.053575 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 09:59:39.083175 ignition[821]: Ignition 2.21.0 May 17 09:59:39.083195 ignition[821]: Stage: disks May 17 09:59:39.083333 ignition[821]: no configs at "/usr/lib/ignition/base.d" May 17 09:59:39.083343 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 09:59:39.084977 ignition[821]: disks: disks passed May 17 09:59:39.087072 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 09:59:39.085039 ignition[821]: Ignition finished successfully May 17 09:59:39.088457 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 09:59:39.089764 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 09:59:39.091540 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 09:59:39.092935 systemd[1]: Reached target sysinit.target - System Initialization. May 17 09:59:39.094513 systemd[1]: Reached target basic.target - Basic System. May 17 09:59:39.097298 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 09:59:39.134582 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 17 09:59:39.139421 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 09:59:39.141736 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 09:59:39.201515 kernel: EXT4-fs (vda9): mounted filesystem 67918cfe-435f-4364-8813-054055159d36 r/w with ordered data mode. Quota mode: none. May 17 09:59:39.201855 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 09:59:39.202955 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 09:59:39.205127 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 09:59:39.206759 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 09:59:39.207745 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 09:59:39.207800 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 09:59:39.207823 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 09:59:39.220927 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 09:59:39.223245 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 17 09:59:39.227725 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (839) May 17 09:59:39.227748 kernel: BTRFS info (device vda6): first mount of filesystem f89a5612-7786-479a-a46f-af205a06b6f7 May 17 09:59:39.227758 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 17 09:59:39.227767 kernel: BTRFS info (device vda6): using free-space-tree May 17 09:59:39.231702 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 09:59:39.269331 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory May 17 09:59:39.273589 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory May 17 09:59:39.277383 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory May 17 09:59:39.281421 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory May 17 09:59:39.353795 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 09:59:39.357262 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 09:59:39.358886 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 09:59:39.373515 kernel: BTRFS info (device vda6): last unmount of filesystem f89a5612-7786-479a-a46f-af205a06b6f7 May 17 09:59:39.382724 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 09:59:39.391270 ignition[953]: INFO : Ignition 2.21.0 May 17 09:59:39.391270 ignition[953]: INFO : Stage: mount May 17 09:59:39.392989 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 09:59:39.392989 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 09:59:39.392989 ignition[953]: INFO : mount: mount passed May 17 09:59:39.392989 ignition[953]: INFO : Ignition finished successfully May 17 09:59:39.394432 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 09:59:39.397003 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 09:59:39.911778 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 09:59:39.913244 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 09:59:39.931881 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (965) May 17 09:59:39.931916 kernel: BTRFS info (device vda6): first mount of filesystem f89a5612-7786-479a-a46f-af205a06b6f7 May 17 09:59:39.931926 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 17 09:59:39.933501 kernel: BTRFS info (device vda6): using free-space-tree May 17 09:59:39.935549 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 09:59:39.973748 ignition[983]: INFO : Ignition 2.21.0 May 17 09:59:39.973748 ignition[983]: INFO : Stage: files May 17 09:59:39.976405 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 09:59:39.976405 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 09:59:39.976405 ignition[983]: DEBUG : files: compiled without relabeling support, skipping May 17 09:59:39.979741 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 09:59:39.979741 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 09:59:39.979741 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 09:59:39.979741 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 09:59:39.979741 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 09:59:39.979035 unknown[983]: wrote ssh authorized keys file for user: core May 17 09:59:39.986531 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 17 09:59:39.986531 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 17 09:59:40.050917 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 09:59:40.191310 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 17 09:59:40.191310 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 09:59:40.195020 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 17 09:59:40.487420 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 09:59:40.536395 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 09:59:40.538419 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 09:59:40.538419 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 09:59:40.538419 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 09:59:40.538419 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 09:59:40.538419 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 09:59:40.538419 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 09:59:40.538419 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 09:59:40.538419 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 09:59:40.551738 ignition[983]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 09:59:40.551738 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 09:59:40.551738 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 17 09:59:40.551738 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 17 09:59:40.551738 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 17 09:59:40.551738 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 May 17 09:59:40.791680 systemd-networkd[799]: eth0: Gained IPv6LL May 17 09:59:40.933401 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 09:59:41.157031 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 17 09:59:41.157031 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 17 09:59:41.161174 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 09:59:41.161174 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 09:59:41.161174 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 17 09:59:41.161174 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 17 09:59:41.161174 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 17 09:59:41.161174 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 17 09:59:41.161174 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 17 09:59:41.161174 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 17 09:59:41.176994 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 17 09:59:41.180323 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 17 09:59:41.181847 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 17 09:59:41.181847 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 17 09:59:41.181847 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 17 09:59:41.181847 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 09:59:41.181847 ignition[983]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 09:59:41.181847 ignition[983]: INFO : files: files passed May 17 09:59:41.181847 ignition[983]: INFO : Ignition finished successfully May 17 09:59:41.183731 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 09:59:41.186532 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 09:59:41.189438 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 09:59:41.204920 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 09:59:41.205035 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 09:59:41.209991 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory May 17 09:59:41.211378 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 09:59:41.211378 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 09:59:41.216921 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 09:59:41.211652 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 09:59:41.214464 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 09:59:41.216351 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 09:59:41.265531 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 09:59:41.266647 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 09:59:41.268072 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 09:59:41.269885 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 09:59:41.271636 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 09:59:41.272415 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 09:59:41.294836 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 09:59:41.299188 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 09:59:41.318876 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 09:59:41.320115 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 09:59:41.321930 systemd[1]: Stopped target timers.target - Timer Units. May 17 09:59:41.323447 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 09:59:41.323585 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 09:59:41.325932 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 09:59:41.327654 systemd[1]: Stopped target basic.target - Basic System. May 17 09:59:41.329054 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 09:59:41.330574 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 09:59:41.332545 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 09:59:41.334534 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
May 17 09:59:41.336462 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 09:59:41.338294 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 09:59:41.340111 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 09:59:41.341836 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 09:59:41.343344 systemd[1]: Stopped target swap.target - Swaps. May 17 09:59:41.344729 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 09:59:41.344848 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 09:59:41.346858 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 09:59:41.348568 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 09:59:41.350458 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 09:59:41.350545 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 09:59:41.352556 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 09:59:41.352665 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 09:59:41.355354 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 09:59:41.355476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 09:59:41.357354 systemd[1]: Stopped target paths.target - Path Units. May 17 09:59:41.358785 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 09:59:41.362517 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 09:59:41.363800 systemd[1]: Stopped target slices.target - Slice Units. May 17 09:59:41.365665 systemd[1]: Stopped target sockets.target - Socket Units. May 17 09:59:41.367221 systemd[1]: iscsid.socket: Deactivated successfully. May 17 09:59:41.367324 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 09:59:41.368762 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 09:59:41.368841 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 09:59:41.370377 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 09:59:41.370509 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 09:59:41.372187 systemd[1]: ignition-files.service: Deactivated successfully. May 17 09:59:41.372303 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 09:59:41.374560 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 09:59:41.377081 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 09:59:41.378331 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 09:59:41.378452 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 09:59:41.380380 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 09:59:41.380481 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 09:59:41.385594 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 09:59:41.385697 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 09:59:41.393446 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 17 09:59:41.399325 ignition[1037]: INFO : Ignition 2.21.0 May 17 09:59:41.399325 ignition[1037]: INFO : Stage: umount May 17 09:59:41.401857 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 09:59:41.401857 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 09:59:41.401857 ignition[1037]: INFO : umount: umount passed May 17 09:59:41.401857 ignition[1037]: INFO : Ignition finished successfully May 17 09:59:41.402777 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 09:59:41.403556 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 09:59:41.405160 systemd[1]: Stopped target network.target - Network. May 17 09:59:41.407321 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 09:59:41.407392 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 09:59:41.409227 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 09:59:41.409288 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 09:59:41.410820 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 09:59:41.410873 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 09:59:41.412291 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 09:59:41.412333 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 09:59:41.413966 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 09:59:41.415501 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 09:59:41.420275 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 09:59:41.420387 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 09:59:41.423285 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 17 09:59:41.423688 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 09:59:41.423736 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 09:59:41.428215 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 17 09:59:41.428451 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 09:59:41.428558 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 09:59:41.432329 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 17 09:59:41.432740 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 17 09:59:41.433960 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 09:59:41.433997 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 09:59:41.436863 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 09:59:41.437963 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 09:59:41.438041 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 09:59:41.440441 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 09:59:41.440509 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 09:59:41.443575 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 09:59:41.443618 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 17 09:59:41.445792 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 09:59:41.449831 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 09:59:41.463103 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 09:59:41.464651 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 09:59:41.465884 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 09:59:41.465930 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 09:59:41.467672 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 09:59:41.467795 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 09:59:41.469753 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 09:59:41.469847 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 09:59:41.472007 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 09:59:41.472077 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 09:59:41.473169 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 09:59:41.473201 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 09:59:41.475259 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 09:59:41.475310 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 09:59:41.477799 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 09:59:41.477850 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 09:59:41.480419 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 09:59:41.480469 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 09:59:41.484128 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 09:59:41.485396 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 17 09:59:41.485455 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 17 09:59:41.488328 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 09:59:41.488372 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 09:59:41.491262 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 09:59:41.491306 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 09:59:41.494643 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 09:59:41.494687 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 09:59:41.496840 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 09:59:41.496883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 09:59:41.510752 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 09:59:41.510871 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 09:59:41.513085 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 09:59:41.515612 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 09:59:41.544634 systemd[1]: Switching root. 
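For reference, the Ignition files stage logged above (the helm and cilium downloads, the local install.sh and yaml files, the kubernetes sysext link, and the prepare-helm/coreos-metadata presets) corresponds to a config roughly like the sketch below. It is an approximation assembled only from the paths and URLs that appear in the log; the spec version, the SSH key, the unit text and the file contents are placeholders, not values read from the machine.

import json

# Approximate Ignition-style config matching the files-stage operations in the
# log above. Placeholders: spec version, SSH key, unit text, and the contents
# of install.sh / *.yaml / update.conf (the log never shows them).
config = {
    "ignition": {"version": "3.4.0"},  # assumed; Ignition 2.21.0 also accepts newer specs
    "passwd": {"users": [{"name": "core",
                          "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"]}]},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"}},
            {"path": "/opt/bin/cilium.tar.gz",
             "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw"}},
            {"path": "/home/core/install.sh", "contents": {"source": "data:,placeholder"}},
            {"path": "/home/core/nginx.yaml", "contents": {"source": "data:,placeholder"}},
            {"path": "/etc/flatcar/update.conf", "contents": {"source": "data:,placeholder"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "# unit text placeholder"},
            {"name": "coreos-metadata.service", "enabled": False,
             "contents": "# unit text placeholder"},
        ]
    },
}

print(json.dumps(config, indent=2))

The paths in the config omit the /sysroot prefix seen in the log: Ignition runs in the initramfs and writes into the mounted root, which is why every logged path starts with /sysroot.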
May 17 09:59:41.584369 systemd-journald[243]: Journal stopped May 17 09:59:42.371547 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). May 17 09:59:42.371599 kernel: SELinux: policy capability network_peer_controls=1 May 17 09:59:42.371613 kernel: SELinux: policy capability open_perms=1 May 17 09:59:42.371622 kernel: SELinux: policy capability extended_socket_class=1 May 17 09:59:42.371631 kernel: SELinux: policy capability always_check_network=0 May 17 09:59:42.371641 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 09:59:42.371652 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 09:59:42.371662 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 09:59:42.371671 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 09:59:42.371680 kernel: SELinux: policy capability userspace_initial_context=0 May 17 09:59:42.371689 kernel: audit: type=1403 audit(1747475981.764:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 09:59:42.371704 systemd[1]: Successfully loaded SELinux policy in 40.517ms. May 17 09:59:42.371723 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.340ms. May 17 09:59:42.371734 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 17 09:59:42.371745 systemd[1]: Detected virtualization kvm. May 17 09:59:42.371758 systemd[1]: Detected architecture arm64. May 17 09:59:42.371768 systemd[1]: Detected first boot. May 17 09:59:42.371777 systemd[1]: Initializing machine ID from VM UUID. May 17 09:59:42.371787 zram_generator::config[1085]: No configuration found. May 17 09:59:42.371797 kernel: NET: Registered PF_VSOCK protocol family May 17 09:59:42.371809 systemd[1]: Populated /etc with preset unit settings. May 17 09:59:42.371820 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 17 09:59:42.371830 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 09:59:42.371840 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 09:59:42.371850 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 09:59:42.371860 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 09:59:42.371870 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 09:59:42.371881 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 09:59:42.371893 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 09:59:42.371903 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 09:59:42.371914 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 09:59:42.371924 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 09:59:42.371934 systemd[1]: Created slice user.slice - User and Session Slice. May 17 09:59:42.371944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 09:59:42.371954 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 17 09:59:42.371964 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 09:59:42.371975 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 09:59:42.371986 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 09:59:42.371996 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 09:59:42.372006 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 17 09:59:42.372016 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 09:59:42.372026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 09:59:42.372036 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 09:59:42.372049 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 09:59:42.372059 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 09:59:42.372070 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 09:59:42.372081 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 09:59:42.372091 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 09:59:42.372101 systemd[1]: Reached target slices.target - Slice Units. May 17 09:59:42.372111 systemd[1]: Reached target swap.target - Swaps. May 17 09:59:42.372121 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 09:59:42.372131 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 09:59:42.372141 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 17 09:59:42.372151 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 09:59:42.372162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 09:59:42.372172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 09:59:42.372182 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 09:59:42.372192 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 09:59:42.372202 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 09:59:42.372212 systemd[1]: Mounting media.mount - External Media Directory... May 17 09:59:42.372222 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 09:59:42.372236 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 09:59:42.372251 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 09:59:42.372268 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 09:59:42.372278 systemd[1]: Reached target machines.target - Containers. May 17 09:59:42.372288 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 09:59:42.372298 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 09:59:42.372308 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 09:59:42.372318 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
May 17 09:59:42.372328 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 09:59:42.372338 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 09:59:42.372349 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 09:59:42.372359 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 09:59:42.372369 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 09:59:42.372379 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 09:59:42.372389 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 09:59:42.372399 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 09:59:42.372410 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 09:59:42.372421 systemd[1]: Stopped systemd-fsck-usr.service. May 17 09:59:42.372432 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 17 09:59:42.372443 kernel: fuse: init (API version 7.41) May 17 09:59:42.372453 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 09:59:42.372462 kernel: loop: module loaded May 17 09:59:42.372472 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 09:59:42.372482 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 09:59:42.372534 kernel: ACPI: bus type drm_connector registered May 17 09:59:42.372551 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 09:59:42.372562 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 17 09:59:42.372571 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 09:59:42.372583 systemd[1]: verity-setup.service: Deactivated successfully. May 17 09:59:42.372593 systemd[1]: Stopped verity-setup.service. May 17 09:59:42.372603 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 09:59:42.372613 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 09:59:42.372646 systemd-journald[1153]: Collecting audit messages is disabled. May 17 09:59:42.372671 systemd[1]: Mounted media.mount - External Media Directory. May 17 09:59:42.372681 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 09:59:42.372692 systemd-journald[1153]: Journal started May 17 09:59:42.372713 systemd-journald[1153]: Runtime Journal (/run/log/journal/26bcbc84c6814c7b86140654e08854dd) is 6M, max 48.5M, 42.4M free. May 17 09:59:42.142277 systemd[1]: Queued start job for default target multi-user.target. May 17 09:59:42.163428 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 17 09:59:42.163799 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 09:59:42.375654 systemd[1]: Started systemd-journald.service - Journal Service. May 17 09:59:42.376318 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 09:59:42.377630 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 09:59:42.378843 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
May 17 09:59:42.380304 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 09:59:42.381782 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 09:59:42.381950 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 09:59:42.383375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 09:59:42.383596 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 09:59:42.384938 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 09:59:42.385107 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 09:59:42.387835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 09:59:42.388015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 09:59:42.389433 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 09:59:42.389603 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 09:59:42.390873 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 09:59:42.391036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 09:59:42.392431 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 09:59:42.393848 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 09:59:42.395364 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 09:59:42.396892 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 17 09:59:42.409260 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 09:59:42.411785 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 09:59:42.413834 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 09:59:42.414950 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 09:59:42.414991 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 09:59:42.416899 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 17 09:59:42.427434 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 09:59:42.430416 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 09:59:42.432473 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 09:59:42.434601 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 09:59:42.438448 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 09:59:42.442325 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 09:59:42.442887 systemd-journald[1153]: Time spent on flushing to /var/log/journal/26bcbc84c6814c7b86140654e08854dd is 13.803ms for 884 entries. May 17 09:59:42.442887 systemd-journald[1153]: System Journal (/var/log/journal/26bcbc84c6814c7b86140654e08854dd) is 8M, max 195.6M, 187.6M free. May 17 09:59:42.464801 systemd-journald[1153]: Received client request to flush runtime journal. 
May 17 09:59:42.444306 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 09:59:42.445310 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 09:59:42.447397 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 09:59:42.451840 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 09:59:42.454624 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 09:59:42.459081 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 09:59:42.460516 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 09:59:42.461980 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 09:59:42.467276 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 09:59:42.474277 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 17 09:59:42.480316 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 09:59:42.482101 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 09:59:42.489412 kernel: loop0: detected capacity change from 0 to 107312 May 17 09:59:42.495677 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. May 17 09:59:42.495695 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. May 17 09:59:42.501038 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 09:59:42.502954 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 17 09:59:42.506725 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 09:59:42.508630 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 09:59:42.528527 kernel: loop1: detected capacity change from 0 to 207008 May 17 09:59:42.556518 kernel: loop2: detected capacity change from 0 to 138376 May 17 09:59:42.562996 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 09:59:42.566662 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 09:59:42.584528 kernel: loop3: detected capacity change from 0 to 107312 May 17 09:59:42.589323 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. May 17 09:59:42.589341 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. May 17 09:59:42.593544 kernel: loop4: detected capacity change from 0 to 207008 May 17 09:59:42.593658 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 09:59:42.605557 kernel: loop5: detected capacity change from 0 to 138376 May 17 09:59:42.613172 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 17 09:59:42.613587 (sd-merge)[1227]: Merged extensions into '/usr'. May 17 09:59:42.617457 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)... May 17 09:59:42.617474 systemd[1]: Reloading... May 17 09:59:42.674509 zram_generator::config[1254]: No configuration found. May 17 09:59:42.753030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
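The (sd-merge) lines above show systemd-sysext picking up the containerd-flatcar, docker-flatcar and kubernetes extension images and merging them into /usr, which is what triggers the service reload that follows. Below is a minimal sketch of the discovery side, assuming the standard sysext search directories (/etc/extensions, /run/extensions, /var/lib/extensions); it only lists candidate images and does not perform a merge.

import os

# Directories systemd-sysext consults for extension images; /etc/extensions is
# where Ignition placed the kubernetes.raw symlink earlier in this log.
SEARCH_PATHS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def candidate_extensions():
    found = []
    for directory in SEARCH_PATHS:
        if not os.path.isdir(directory):
            continue
        for name in sorted(os.listdir(directory)):
            found.append(os.path.join(directory, name))
    return found

if __name__ == "__main__":
    for path in candidate_extensions():
        print(path)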
May 17 09:59:42.754165 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 09:59:42.815202 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 09:59:42.815576 systemd[1]: Reloading finished in 197 ms. May 17 09:59:42.835100 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 09:59:42.836559 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 09:59:42.847833 systemd[1]: Starting ensure-sysext.service... May 17 09:59:42.849586 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 09:59:42.861408 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... May 17 09:59:42.861423 systemd[1]: Reloading... May 17 09:59:42.867296 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 17 09:59:42.867640 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 17 09:59:42.867935 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 09:59:42.868194 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 09:59:42.868892 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 09:59:42.869189 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. May 17 09:59:42.869317 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. May 17 09:59:42.874192 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. May 17 09:59:42.874402 systemd-tmpfiles[1289]: Skipping /boot May 17 09:59:42.883793 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. May 17 09:59:42.883906 systemd-tmpfiles[1289]: Skipping /boot May 17 09:59:42.915542 zram_generator::config[1316]: No configuration found. May 17 09:59:42.979273 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 09:59:43.040886 systemd[1]: Reloading finished in 179 ms. May 17 09:59:43.049048 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 09:59:43.051727 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 09:59:43.064481 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 17 09:59:43.066708 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 09:59:43.068924 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 09:59:43.074656 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 09:59:43.077755 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 09:59:43.080159 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 09:59:43.085907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 09:59:43.097532 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 17 09:59:43.099579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 09:59:43.101812 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 09:59:43.104306 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 09:59:43.104428 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 17 09:59:43.105740 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 09:59:43.108342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 09:59:43.109746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 09:59:43.111529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 09:59:43.111677 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 09:59:43.113420 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 09:59:43.113584 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 09:59:43.121394 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 09:59:43.124785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 09:59:43.127904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 09:59:43.131349 systemd-udevd[1363]: Using default interface naming scheme 'v255'. May 17 09:59:43.131676 augenrules[1387]: No rules May 17 09:59:43.139717 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 09:59:43.140893 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 09:59:43.141071 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 17 09:59:43.142343 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 09:59:43.146808 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 09:59:43.150459 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 09:59:43.152726 systemd[1]: audit-rules.service: Deactivated successfully. May 17 09:59:43.165665 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 17 09:59:43.179523 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 09:59:43.181214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 09:59:43.181376 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 09:59:43.184124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 09:59:43.184282 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 09:59:43.185972 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 09:59:43.187518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 09:59:43.189116 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
May 17 09:59:43.193009 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 09:59:43.203584 systemd[1]: Finished ensure-sysext.service. May 17 09:59:43.210596 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 17 09:59:43.213685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 09:59:43.215678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 09:59:43.218968 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 09:59:43.222581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 09:59:43.229704 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 09:59:43.231421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 09:59:43.231472 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 17 09:59:43.234231 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 09:59:43.239715 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 09:59:43.241667 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 09:59:43.242125 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 09:59:43.242310 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 09:59:43.243769 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 09:59:43.243956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 09:59:43.248696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 09:59:43.248874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 09:59:43.253357 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 09:59:43.254991 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 09:59:43.256867 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 17 09:59:43.256971 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 09:59:43.257020 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 09:59:43.258638 augenrules[1430]: /sbin/augenrules: No change May 17 09:59:43.267542 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 09:59:43.268191 augenrules[1462]: No rules May 17 09:59:43.270474 systemd[1]: audit-rules.service: Deactivated successfully. May 17 09:59:43.270726 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 17 09:59:43.318559 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 17 09:59:43.320960 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 17 09:59:43.376835 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 09:59:43.379608 systemd-networkd[1440]: lo: Link UP May 17 09:59:43.379828 systemd-networkd[1440]: lo: Gained carrier May 17 09:59:43.380643 systemd-networkd[1440]: Enumeration completed May 17 09:59:43.380828 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 09:59:43.381334 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 09:59:43.381407 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 09:59:43.381860 systemd-networkd[1440]: eth0: Link UP May 17 09:59:43.382066 systemd-networkd[1440]: eth0: Gained carrier May 17 09:59:43.382129 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 09:59:43.386580 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 17 09:59:43.389761 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 09:59:43.390352 systemd-resolved[1356]: Positive Trust Anchors: May 17 09:59:43.390362 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 09:59:43.390392 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 09:59:43.391299 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 09:59:43.392682 systemd[1]: Reached target time-set.target - System Time Set. May 17 09:59:43.397849 systemd-resolved[1356]: Defaulting to hostname 'linux'. May 17 09:59:43.402804 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 09:59:43.405599 systemd[1]: Reached target network.target - Network. May 17 09:59:43.406528 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 09:59:43.406553 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 09:59:43.407677 systemd[1]: Reached target sysinit.target - System Initialization. May 17 09:59:43.408225 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection. May 17 09:59:43.408865 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 09:59:43.409609 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 17 09:59:43.409717 systemd-timesyncd[1446]: Initial clock synchronization to Sat 2025-05-17 09:59:43.804708 UTC. May 17 09:59:43.410344 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 09:59:43.411737 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
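eth0 above is matched by /usr/lib/systemd/network/zz-default.network and acquires 10.0.0.72/16, a gateway and an NTP server from 10.0.0.1 over DHCPv4. The shipped unit itself is not printed in the log, so the fragment below is only an approximation of a catch-all DHCP .network file of that kind, held in a Python string for illustration.

# Approximate catch-all DHCP .network unit of the sort that matched eth0 above;
# the real zz-default.network shipped by the image may differ.
NETWORK_UNIT = """\
[Match]
Name=e*

[Network]
DHCP=yes
"""

if __name__ == "__main__":
    # A local override would normally be dropped under /etc/systemd/network/;
    # here the text is only printed.
    print(NETWORK_UNIT)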
May 17 09:59:43.412621 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 09:59:43.413552 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 09:59:43.414458 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 09:59:43.414511 systemd[1]: Reached target paths.target - Path Units. May 17 09:59:43.415383 systemd[1]: Reached target timers.target - Timer Units. May 17 09:59:43.417202 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 09:59:43.419572 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 09:59:43.423035 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 17 09:59:43.424457 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 17 09:59:43.425557 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 17 09:59:43.429103 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 09:59:43.430779 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 17 09:59:43.434004 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 17 09:59:43.436726 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 09:59:43.439098 systemd[1]: Reached target sockets.target - Socket Units. May 17 09:59:43.440083 systemd[1]: Reached target basic.target - Basic System. May 17 09:59:43.441573 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 09:59:43.441602 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 09:59:43.443698 systemd[1]: Starting containerd.service - containerd container runtime... May 17 09:59:43.446521 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 09:59:43.453078 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 09:59:43.456609 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 09:59:43.459718 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 09:59:43.460742 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 09:59:43.462120 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 09:59:43.464128 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 09:59:43.469062 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 09:59:43.472339 jq[1504]: false May 17 09:59:43.472783 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 09:59:43.479697 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 09:59:43.481677 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 09:59:43.482435 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 17 09:59:43.483146 systemd[1]: Starting update-engine.service - Update Engine... May 17 09:59:43.485609 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 09:59:43.494009 extend-filesystems[1505]: Found loop3 May 17 09:59:43.494906 extend-filesystems[1505]: Found loop4 May 17 09:59:43.494906 extend-filesystems[1505]: Found loop5 May 17 09:59:43.498411 extend-filesystems[1505]: Found vda May 17 09:59:43.498411 extend-filesystems[1505]: Found vda1 May 17 09:59:43.498411 extend-filesystems[1505]: Found vda2 May 17 09:59:43.498411 extend-filesystems[1505]: Found vda3 May 17 09:59:43.498411 extend-filesystems[1505]: Found usr May 17 09:59:43.498411 extend-filesystems[1505]: Found vda4 May 17 09:59:43.498411 extend-filesystems[1505]: Found vda6 May 17 09:59:43.498411 extend-filesystems[1505]: Found vda7 May 17 09:59:43.498411 extend-filesystems[1505]: Found vda9 May 17 09:59:43.498411 extend-filesystems[1505]: Checking size of /dev/vda9 May 17 09:59:43.496625 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 09:59:43.517615 extend-filesystems[1505]: Resized partition /dev/vda9 May 17 09:59:43.500538 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 09:59:43.501585 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 09:59:43.519037 extend-filesystems[1529]: resize2fs 1.47.2 (1-Jan-2025) May 17 09:59:43.501869 systemd[1]: motdgen.service: Deactivated successfully. May 17 09:59:43.520948 jq[1519]: true May 17 09:59:43.502018 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 09:59:43.507869 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 09:59:43.508055 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 09:59:43.529151 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 17 09:59:43.546456 tar[1525]: linux-arm64/LICENSE May 17 09:59:43.546456 tar[1525]: linux-arm64/helm May 17 09:59:43.544891 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 09:59:43.549931 update_engine[1517]: I20250517 09:59:43.549736 1517 main.cc:92] Flatcar Update Engine starting May 17 09:59:43.550526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 09:59:43.579528 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 17 09:59:43.585766 jq[1528]: true May 17 09:59:43.593071 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (Power Button) May 17 09:59:43.593539 extend-filesystems[1529]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 09:59:43.593539 extend-filesystems[1529]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 09:59:43.593539 extend-filesystems[1529]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 17 09:59:43.603399 extend-filesystems[1505]: Resized filesystem in /dev/vda9 May 17 09:59:43.598984 dbus-daemon[1502]: [system] SELinux support is enabled May 17 09:59:43.593562 systemd-logind[1516]: New seat seat0. May 17 09:59:43.594965 systemd[1]: Started systemd-logind.service - User Login Management. May 17 09:59:43.598932 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 09:59:43.606722 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
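The extend-filesystems run above grows /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 7.1 GiB. A quick check of those figures:

# Sanity check of the resize2fs/EXT4 numbers logged above.
BLOCK_SIZE = 4096  # "(4k) blocks"

for label, blocks in (("before", 553472), ("after", 1864699)):
    size = blocks * BLOCK_SIZE
    print(f"{label}: {blocks} blocks * {BLOCK_SIZE} B = {size} B "
          f"= {size / 2**30:.2f} GiB")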
May 17 09:59:43.608606 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 09:59:43.609587 update_engine[1517]: I20250517 09:59:43.609474 1517 update_check_scheduler.cc:74] Next update check in 6m34s May 17 09:59:43.616614 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 09:59:43.616642 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 09:59:43.618208 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 09:59:43.618236 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 09:59:43.619315 dbus-daemon[1502]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 09:59:43.619744 systemd[1]: Started update-engine.service - Update Engine. May 17 09:59:43.627801 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 09:59:43.650508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 09:59:43.657630 bash[1564]: Updated "/home/core/.ssh/authorized_keys" May 17 09:59:43.661447 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 09:59:43.663198 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 17 09:59:43.690578 locksmithd[1560]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 09:59:43.765497 containerd[1530]: time="2025-05-17T09:59:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 17 09:59:43.766239 containerd[1530]: time="2025-05-17T09:59:43.766197200Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 17 09:59:43.777160 containerd[1530]: time="2025-05-17T09:59:43.777126680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.92µs" May 17 09:59:43.777160 containerd[1530]: time="2025-05-17T09:59:43.777157480Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 17 09:59:43.777247 containerd[1530]: time="2025-05-17T09:59:43.777176440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 17 09:59:43.778492 containerd[1530]: time="2025-05-17T09:59:43.777329800Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 17 09:59:43.778492 containerd[1530]: time="2025-05-17T09:59:43.777352720Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 17 09:59:43.778492 containerd[1530]: time="2025-05-17T09:59:43.777376040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 17 09:59:43.778492 containerd[1530]: time="2025-05-17T09:59:43.777422760Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 17 09:59:43.778492 containerd[1530]: 
time="2025-05-17T09:59:43.777434560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 17 09:59:43.778492 containerd[1530]: time="2025-05-17T09:59:43.777728200Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 17 09:59:43.778492 containerd[1530]: time="2025-05-17T09:59:43.777743040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 17 09:59:43.778492 containerd[1530]: time="2025-05-17T09:59:43.777754080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 17 09:59:43.778492 containerd[1530]: time="2025-05-17T09:59:43.777762680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 17 09:59:43.778492 containerd[1530]: time="2025-05-17T09:59:43.777832080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 17 09:59:43.778492 containerd[1530]: time="2025-05-17T09:59:43.778007720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 17 09:59:43.778673 containerd[1530]: time="2025-05-17T09:59:43.778032240Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 17 09:59:43.778673 containerd[1530]: time="2025-05-17T09:59:43.778042400Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 17 09:59:43.778673 containerd[1530]: time="2025-05-17T09:59:43.778079600Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 17 09:59:43.778673 containerd[1530]: time="2025-05-17T09:59:43.778274640Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 17 09:59:43.778673 containerd[1530]: time="2025-05-17T09:59:43.778339760Z" level=info msg="metadata content store policy set" policy=shared May 17 09:59:43.781656 containerd[1530]: time="2025-05-17T09:59:43.781628520Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 17 09:59:43.781693 containerd[1530]: time="2025-05-17T09:59:43.781676560Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 17 09:59:43.781711 containerd[1530]: time="2025-05-17T09:59:43.781690360Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 17 09:59:43.781711 containerd[1530]: time="2025-05-17T09:59:43.781703720Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 17 09:59:43.781742 containerd[1530]: time="2025-05-17T09:59:43.781714920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 17 09:59:43.781742 containerd[1530]: time="2025-05-17T09:59:43.781727200Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 17 09:59:43.781742 containerd[1530]: 
time="2025-05-17T09:59:43.781737760Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 17 09:59:43.781793 containerd[1530]: time="2025-05-17T09:59:43.781748920Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 17 09:59:43.781793 containerd[1530]: time="2025-05-17T09:59:43.781759480Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 17 09:59:43.781793 containerd[1530]: time="2025-05-17T09:59:43.781769280Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 17 09:59:43.781793 containerd[1530]: time="2025-05-17T09:59:43.781778520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 17 09:59:43.781793 containerd[1530]: time="2025-05-17T09:59:43.781791920Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 17 09:59:43.781926 containerd[1530]: time="2025-05-17T09:59:43.781904880Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 17 09:59:43.781949 containerd[1530]: time="2025-05-17T09:59:43.781931480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 17 09:59:43.781949 containerd[1530]: time="2025-05-17T09:59:43.781946120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 17 09:59:43.781985 containerd[1530]: time="2025-05-17T09:59:43.781956720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 17 09:59:43.781985 containerd[1530]: time="2025-05-17T09:59:43.781966680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 17 09:59:43.781985 containerd[1530]: time="2025-05-17T09:59:43.781977440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 17 09:59:43.782034 containerd[1530]: time="2025-05-17T09:59:43.781994800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 17 09:59:43.782034 containerd[1530]: time="2025-05-17T09:59:43.782008880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 17 09:59:43.782034 containerd[1530]: time="2025-05-17T09:59:43.782020200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 17 09:59:43.782034 containerd[1530]: time="2025-05-17T09:59:43.782030640Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 17 09:59:43.782100 containerd[1530]: time="2025-05-17T09:59:43.782040400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 17 09:59:43.782237 containerd[1530]: time="2025-05-17T09:59:43.782220760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 17 09:59:43.782273 containerd[1530]: time="2025-05-17T09:59:43.782247880Z" level=info msg="Start snapshots syncer" May 17 09:59:43.782291 containerd[1530]: time="2025-05-17T09:59:43.782276360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 17 09:59:43.782523 containerd[1530]: time="2025-05-17T09:59:43.782467400Z" 
level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 17 09:59:43.782602 containerd[1530]: time="2025-05-17T09:59:43.782538520Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 17 09:59:43.782621 containerd[1530]: time="2025-05-17T09:59:43.782612720Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 17 09:59:43.782735 containerd[1530]: time="2025-05-17T09:59:43.782713600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 17 09:59:43.782759 containerd[1530]: time="2025-05-17T09:59:43.782741360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 17 09:59:43.782759 containerd[1530]: time="2025-05-17T09:59:43.782752600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 17 09:59:43.782792 containerd[1530]: time="2025-05-17T09:59:43.782764600Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 17 09:59:43.782792 containerd[1530]: time="2025-05-17T09:59:43.782776840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 17 09:59:43.782792 containerd[1530]: time="2025-05-17T09:59:43.782786680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 17 09:59:43.782837 containerd[1530]: time="2025-05-17T09:59:43.782796320Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 17 09:59:43.782837 containerd[1530]: time="2025-05-17T09:59:43.782825400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 17 
09:59:43.782873 containerd[1530]: time="2025-05-17T09:59:43.782836400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 17 09:59:43.782873 containerd[1530]: time="2025-05-17T09:59:43.782846640Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 17 09:59:43.782905 containerd[1530]: time="2025-05-17T09:59:43.782883080Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 17 09:59:43.782905 containerd[1530]: time="2025-05-17T09:59:43.782896080Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 17 09:59:43.782937 containerd[1530]: time="2025-05-17T09:59:43.782904640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 17 09:59:43.782937 containerd[1530]: time="2025-05-17T09:59:43.782914320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 17 09:59:43.782937 containerd[1530]: time="2025-05-17T09:59:43.782921560Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 17 09:59:43.782937 containerd[1530]: time="2025-05-17T09:59:43.782933240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 17 09:59:43.783001 containerd[1530]: time="2025-05-17T09:59:43.782943000Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 17 09:59:43.783605 containerd[1530]: time="2025-05-17T09:59:43.783018680Z" level=info msg="runtime interface created" May 17 09:59:43.783605 containerd[1530]: time="2025-05-17T09:59:43.783027720Z" level=info msg="created NRI interface" May 17 09:59:43.783605 containerd[1530]: time="2025-05-17T09:59:43.783035960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 17 09:59:43.783605 containerd[1530]: time="2025-05-17T09:59:43.783047040Z" level=info msg="Connect containerd service" May 17 09:59:43.783605 containerd[1530]: time="2025-05-17T09:59:43.783077320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 09:59:43.783735 containerd[1530]: time="2025-05-17T09:59:43.783707240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 09:59:43.887372 containerd[1530]: time="2025-05-17T09:59:43.887259200Z" level=info msg="Start subscribing containerd event" May 17 09:59:43.887372 containerd[1530]: time="2025-05-17T09:59:43.887353480Z" level=info msg="Start recovering state" May 17 09:59:43.887527 containerd[1530]: time="2025-05-17T09:59:43.887474400Z" level=info msg="Start event monitor" May 17 09:59:43.887527 containerd[1530]: time="2025-05-17T09:59:43.887516640Z" level=info msg="Start cni network conf syncer for default" May 17 09:59:43.887527 containerd[1530]: time="2025-05-17T09:59:43.887525920Z" level=info msg="Start streaming server" May 17 09:59:43.887581 containerd[1530]: time="2025-05-17T09:59:43.887535560Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 17 09:59:43.887581 
containerd[1530]: time="2025-05-17T09:59:43.887542640Z" level=info msg="runtime interface starting up..." May 17 09:59:43.887581 containerd[1530]: time="2025-05-17T09:59:43.887548440Z" level=info msg="starting plugins..." May 17 09:59:43.887635 containerd[1530]: time="2025-05-17T09:59:43.887587560Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 17 09:59:43.887635 containerd[1530]: time="2025-05-17T09:59:43.887589040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 09:59:43.887668 containerd[1530]: time="2025-05-17T09:59:43.887644400Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 09:59:43.889299 containerd[1530]: time="2025-05-17T09:59:43.887847720Z" level=info msg="containerd successfully booted in 0.122760s" May 17 09:59:43.887921 systemd[1]: Started containerd.service - containerd container runtime. May 17 09:59:44.000498 tar[1525]: linux-arm64/README.md May 17 09:59:44.014747 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 09:59:44.399078 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 09:59:44.418404 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 09:59:44.421449 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 09:59:44.441900 systemd[1]: issuegen.service: Deactivated successfully. May 17 09:59:44.442112 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 09:59:44.444712 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 09:59:44.462207 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 09:59:44.464975 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 09:59:44.467123 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 17 09:59:44.468516 systemd[1]: Reached target getty.target - Login Prompts. May 17 09:59:45.208019 systemd-networkd[1440]: eth0: Gained IPv6LL May 17 09:59:45.210442 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 09:59:45.212290 systemd[1]: Reached target network-online.target - Network is Online. May 17 09:59:45.214826 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 17 09:59:45.217214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 09:59:45.236849 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 09:59:45.251189 systemd[1]: coreos-metadata.service: Deactivated successfully. May 17 09:59:45.251422 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 17 09:59:45.253126 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 09:59:45.262582 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 09:59:45.794785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 09:59:45.796465 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 09:59:45.798123 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 09:59:45.798673 systemd[1]: Startup finished in 2.081s (kernel) + 5.146s (initrd) + 4.082s (userspace) = 11.310s. 
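At this point containerd is up and serving on /run/containerd/containerd.sock, while the CRI plugin has already warned that no CNI config exists under /etc/cni/net.d (normal before a pod network add-on is installed). A minimal sketch of how one might confirm the daemon and its CRI endpoint are reachable, assuming the ctr and crictl client tools are present on the host:

    # Talk to containerd directly over the socket the log shows it serving on.
    ctr --address /run/containerd/containerd.sock version

    # Query the CRI side of the same daemon; pointing crictl at this endpoint
    # is an assumption based on the socket path logged above.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info

The "failed to load cni during init" error seen earlier should clear on its own once a CNI configuration is dropped into /etc/cni/net.d.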
May 17 09:59:46.205850 kubelet[1636]: E0517 09:59:46.205735 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 09:59:46.208084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 09:59:46.208225 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 09:59:46.208785 systemd[1]: kubelet.service: Consumed 782ms CPU time, 254M memory peak. May 17 09:59:49.277930 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 09:59:49.279013 systemd[1]: Started sshd@0-10.0.0.72:22-10.0.0.1:58822.service - OpenSSH per-connection server daemon (10.0.0.1:58822). May 17 09:59:49.337738 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 58822 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 09:59:49.339351 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 09:59:49.346978 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 09:59:49.347884 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 09:59:49.353929 systemd-logind[1516]: New session 1 of user core. May 17 09:59:49.370262 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 09:59:49.373664 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 09:59:49.391471 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 09:59:49.393717 systemd-logind[1516]: New session c1 of user core. May 17 09:59:49.503613 systemd[1653]: Queued start job for default target default.target. May 17 09:59:49.515409 systemd[1653]: Created slice app.slice - User Application Slice. May 17 09:59:49.515440 systemd[1653]: Reached target paths.target - Paths. May 17 09:59:49.515478 systemd[1653]: Reached target timers.target - Timers. May 17 09:59:49.516716 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 09:59:49.525585 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 09:59:49.525645 systemd[1653]: Reached target sockets.target - Sockets. May 17 09:59:49.525681 systemd[1653]: Reached target basic.target - Basic System. May 17 09:59:49.525708 systemd[1653]: Reached target default.target - Main User Target. May 17 09:59:49.525732 systemd[1653]: Startup finished in 126ms. May 17 09:59:49.525878 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 09:59:49.527196 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 09:59:49.589918 systemd[1]: Started sshd@1-10.0.0.72:22-10.0.0.1:58824.service - OpenSSH per-connection server daemon (10.0.0.1:58824). May 17 09:59:49.642723 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 58824 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 09:59:49.643983 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 09:59:49.648565 systemd-logind[1516]: New session 2 of user core. May 17 09:59:49.657689 systemd[1]: Started session-2.scope - Session 2 of User core. 
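The kubelet exit above is the usual symptom of kubelet.service starting before kubeadm has generated /var/lib/kubelet/config.yaml; the unit fails with status 1 until that file exists. A sketch of how one could confirm the failure from the same host, using only standard systemd tooling (nothing specific to this image):

    # Unit state and the exit status systemd recorded for the last run.
    systemctl status kubelet.service --no-pager

    # Replay the kubelet's recent log lines, including the
    # "failed to load Kubelet config file" error shown above.
    journalctl -u kubelet.service --no-pager -n 50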
May 17 09:59:49.709062 sshd[1666]: Connection closed by 10.0.0.1 port 58824 May 17 09:59:49.709375 sshd-session[1664]: pam_unix(sshd:session): session closed for user core May 17 09:59:49.720460 systemd[1]: sshd@1-10.0.0.72:22-10.0.0.1:58824.service: Deactivated successfully. May 17 09:59:49.722912 systemd[1]: session-2.scope: Deactivated successfully. May 17 09:59:49.724715 systemd-logind[1516]: Session 2 logged out. Waiting for processes to exit. May 17 09:59:49.726026 systemd[1]: Started sshd@2-10.0.0.72:22-10.0.0.1:58826.service - OpenSSH per-connection server daemon (10.0.0.1:58826). May 17 09:59:49.727037 systemd-logind[1516]: Removed session 2. May 17 09:59:49.780543 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 58826 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 09:59:49.781766 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 09:59:49.786302 systemd-logind[1516]: New session 3 of user core. May 17 09:59:49.794699 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 09:59:49.845328 sshd[1674]: Connection closed by 10.0.0.1 port 58826 May 17 09:59:49.845522 sshd-session[1672]: pam_unix(sshd:session): session closed for user core May 17 09:59:49.856413 systemd[1]: sshd@2-10.0.0.72:22-10.0.0.1:58826.service: Deactivated successfully. May 17 09:59:49.858838 systemd[1]: session-3.scope: Deactivated successfully. May 17 09:59:49.859541 systemd-logind[1516]: Session 3 logged out. Waiting for processes to exit. May 17 09:59:49.861732 systemd[1]: Started sshd@3-10.0.0.72:22-10.0.0.1:58838.service - OpenSSH per-connection server daemon (10.0.0.1:58838). May 17 09:59:49.862358 systemd-logind[1516]: Removed session 3. May 17 09:59:49.917092 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 58838 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 09:59:49.918250 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 09:59:49.922626 systemd-logind[1516]: New session 4 of user core. May 17 09:59:49.928664 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 09:59:49.981110 sshd[1682]: Connection closed by 10.0.0.1 port 58838 May 17 09:59:49.981399 sshd-session[1680]: pam_unix(sshd:session): session closed for user core May 17 09:59:49.995613 systemd[1]: sshd@3-10.0.0.72:22-10.0.0.1:58838.service: Deactivated successfully. May 17 09:59:49.997951 systemd[1]: session-4.scope: Deactivated successfully. May 17 09:59:49.998691 systemd-logind[1516]: Session 4 logged out. Waiting for processes to exit. May 17 09:59:50.001337 systemd[1]: Started sshd@4-10.0.0.72:22-10.0.0.1:58854.service - OpenSSH per-connection server daemon (10.0.0.1:58854). May 17 09:59:50.001979 systemd-logind[1516]: Removed session 4. May 17 09:59:50.056438 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 58854 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 09:59:50.057763 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 09:59:50.061614 systemd-logind[1516]: New session 5 of user core. May 17 09:59:50.072739 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 17 09:59:50.136396 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 09:59:50.136711 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 09:59:50.149128 sudo[1691]: pam_unix(sudo:session): session closed for user root May 17 09:59:50.150712 sshd[1690]: Connection closed by 10.0.0.1 port 58854 May 17 09:59:50.151140 sshd-session[1688]: pam_unix(sshd:session): session closed for user core May 17 09:59:50.162681 systemd[1]: sshd@4-10.0.0.72:22-10.0.0.1:58854.service: Deactivated successfully. May 17 09:59:50.164929 systemd[1]: session-5.scope: Deactivated successfully. May 17 09:59:50.165737 systemd-logind[1516]: Session 5 logged out. Waiting for processes to exit. May 17 09:59:50.168296 systemd[1]: Started sshd@5-10.0.0.72:22-10.0.0.1:58868.service - OpenSSH per-connection server daemon (10.0.0.1:58868). May 17 09:59:50.168921 systemd-logind[1516]: Removed session 5. May 17 09:59:50.225121 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 58868 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 09:59:50.226463 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 09:59:50.230955 systemd-logind[1516]: New session 6 of user core. May 17 09:59:50.243680 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 09:59:50.295537 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 09:59:50.296072 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 09:59:50.305350 sudo[1701]: pam_unix(sudo:session): session closed for user root May 17 09:59:50.309858 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 17 09:59:50.310113 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 09:59:50.317944 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 17 09:59:50.372227 augenrules[1723]: No rules May 17 09:59:50.373347 systemd[1]: audit-rules.service: Deactivated successfully. May 17 09:59:50.374654 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 17 09:59:50.375976 sudo[1700]: pam_unix(sudo:session): session closed for user root May 17 09:59:50.377418 sshd[1699]: Connection closed by 10.0.0.1 port 58868 May 17 09:59:50.377963 sshd-session[1697]: pam_unix(sshd:session): session closed for user core May 17 09:59:50.385505 systemd[1]: sshd@5-10.0.0.72:22-10.0.0.1:58868.service: Deactivated successfully. May 17 09:59:50.387663 systemd[1]: session-6.scope: Deactivated successfully. May 17 09:59:50.389012 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit. May 17 09:59:50.391311 systemd[1]: Started sshd@6-10.0.0.72:22-10.0.0.1:58884.service - OpenSSH per-connection server daemon (10.0.0.1:58884). May 17 09:59:50.391777 systemd-logind[1516]: Removed session 6. May 17 09:59:50.445501 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 58884 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 09:59:50.446699 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 09:59:50.451266 systemd-logind[1516]: New session 7 of user core. May 17 09:59:50.461774 systemd[1]: Started session-7.scope - Session 7 of User core. 
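Sessions 5 and 6 above carry three privileged commands: switching SELinux to enforcing mode, deleting two audit rule fragments, and restarting audit-rules (after which augenrules reports no rules). Reconstructed as a shell sketch, taken directly from the COMMAND= fields recorded by sudo in the log:

    # Session 5: put SELinux into enforcing mode.
    sudo setenforce 1

    # Session 6: drop the SELinux and default audit rule fragments, then reload
    # the audit rules; augenrules subsequently reports "No rules".
    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules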
May 17 09:59:50.513187 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 09:59:50.513779 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 09:59:50.886137 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 09:59:50.904796 (dockerd)[1756]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 09:59:51.169746 dockerd[1756]: time="2025-05-17T09:59:51.169632847Z" level=info msg="Starting up" May 17 09:59:51.171166 dockerd[1756]: time="2025-05-17T09:59:51.171139196Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 17 09:59:51.284761 dockerd[1756]: time="2025-05-17T09:59:51.284713446Z" level=info msg="Loading containers: start." May 17 09:59:51.292525 kernel: Initializing XFRM netlink socket May 17 09:59:51.483498 systemd-networkd[1440]: docker0: Link UP May 17 09:59:51.487212 dockerd[1756]: time="2025-05-17T09:59:51.487167394Z" level=info msg="Loading containers: done." May 17 09:59:51.508207 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3069705744-merged.mount: Deactivated successfully. May 17 09:59:51.520711 dockerd[1756]: time="2025-05-17T09:59:51.520666272Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 09:59:51.520816 dockerd[1756]: time="2025-05-17T09:59:51.520766581Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 17 09:59:51.520892 dockerd[1756]: time="2025-05-17T09:59:51.520871988Z" level=info msg="Initializing buildkit" May 17 09:59:51.541948 dockerd[1756]: time="2025-05-17T09:59:51.541902566Z" level=info msg="Completed buildkit initialization" May 17 09:59:51.548142 dockerd[1756]: time="2025-05-17T09:59:51.548111414Z" level=info msg="Daemon has completed initialization" May 17 09:59:51.548229 dockerd[1756]: time="2025-05-17T09:59:51.548160223Z" level=info msg="API listen on /run/docker.sock" May 17 09:59:51.548386 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 09:59:52.239181 containerd[1530]: time="2025-05-17T09:59:52.239144904Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 09:59:52.868313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount274430770.mount: Deactivated successfully. 
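The PullImage line above is the first of a series of control-plane image pulls (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd) that follow. A sketch of how the same image could be fetched by hand through the CRI endpoint, assuming crictl is pointed at the containerd socket seen earlier; the tag matches the one being pulled here:

    # Pre-pull one control-plane image through containerd's CRI plugin.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.32.5

    # List what is already cached locally.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images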
May 17 09:59:53.598607 containerd[1530]: time="2025-05-17T09:59:53.598485595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:53.598983 containerd[1530]: time="2025-05-17T09:59:53.598957380Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326313" May 17 09:59:53.599917 containerd[1530]: time="2025-05-17T09:59:53.599844647Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:53.602399 containerd[1530]: time="2025-05-17T09:59:53.602346709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:53.603449 containerd[1530]: time="2025-05-17T09:59:53.603414011Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 1.364228651s" May 17 09:59:53.603521 containerd[1530]: time="2025-05-17T09:59:53.603451399Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\"" May 17 09:59:53.604456 containerd[1530]: time="2025-05-17T09:59:53.604431626Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 09:59:54.569118 containerd[1530]: time="2025-05-17T09:59:54.569024236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:54.580402 containerd[1530]: time="2025-05-17T09:59:54.580346631Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530549" May 17 09:59:54.595195 containerd[1530]: time="2025-05-17T09:59:54.595153493Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:54.611791 containerd[1530]: time="2025-05-17T09:59:54.611730346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:54.612788 containerd[1530]: time="2025-05-17T09:59:54.612759754Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 1.008295545s" May 17 09:59:54.612833 containerd[1530]: time="2025-05-17T09:59:54.612794155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\"" May 17 09:59:54.613477 
containerd[1530]: time="2025-05-17T09:59:54.613180715Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 09:59:55.534503 containerd[1530]: time="2025-05-17T09:59:55.534442006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:55.534889 containerd[1530]: time="2025-05-17T09:59:55.534862251Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484192" May 17 09:59:55.535709 containerd[1530]: time="2025-05-17T09:59:55.535655896Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:55.538018 containerd[1530]: time="2025-05-17T09:59:55.537982579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:55.539068 containerd[1530]: time="2025-05-17T09:59:55.539034852Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 925.82388ms" May 17 09:59:55.539134 containerd[1530]: time="2025-05-17T09:59:55.539070007Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\"" May 17 09:59:55.539840 containerd[1530]: time="2025-05-17T09:59:55.539659759Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 09:59:56.426707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489814699.mount: Deactivated successfully. May 17 09:59:56.427721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 09:59:56.428962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 09:59:56.578371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 09:59:56.588880 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 09:59:56.658520 kubelet[2041]: E0517 09:59:56.658457 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 09:59:56.662030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 09:59:56.662166 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 09:59:56.662780 systemd[1]: kubelet.service: Consumed 158ms CPU time, 108.8M memory peak. 
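kubelet.service is now in a scheduled-restart loop for the same reason as before: /var/lib/kubelet/config.yaml still does not exist. On a kubeadm-managed node that file is normally written by kubeadm init or kubeadm join rather than by hand; the snippet below is only an illustrative, minimal KubeletConfiguration of the kind that ends up there, with the two values chosen to match settings visible elsewhere in this log (everything else about it is an assumption):

    # Illustrative only: a minimal KubeletConfiguration of the sort kubeadm writes
    # to /var/lib/kubelet/config.yaml. Do not treat these values as this host's.
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches the SystemdCgroup=true runc option containerd logged
    staticPodPath: /etc/kubernetes/manifests   # matches the "Adding static pod path" line later in the log
    EOF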
May 17 09:59:56.891371 containerd[1530]: time="2025-05-17T09:59:56.891258464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:56.892293 containerd[1530]: time="2025-05-17T09:59:56.892133162Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377377" May 17 09:59:56.893021 containerd[1530]: time="2025-05-17T09:59:56.892966210Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:56.894526 containerd[1530]: time="2025-05-17T09:59:56.894482390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:56.895417 containerd[1530]: time="2025-05-17T09:59:56.895092844Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.355400288s" May 17 09:59:56.895417 containerd[1530]: time="2025-05-17T09:59:56.895126577Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\"" May 17 09:59:56.895634 containerd[1530]: time="2025-05-17T09:59:56.895565704Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 09:59:57.363730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268759070.mount: Deactivated successfully. 
May 17 09:59:57.945741 containerd[1530]: time="2025-05-17T09:59:57.945681182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:57.946237 containerd[1530]: time="2025-05-17T09:59:57.946201392Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 17 09:59:57.946939 containerd[1530]: time="2025-05-17T09:59:57.946904865Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:57.950140 containerd[1530]: time="2025-05-17T09:59:57.950084391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 09:59:57.951827 containerd[1530]: time="2025-05-17T09:59:57.951787694Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.05619219s" May 17 09:59:57.951827 containerd[1530]: time="2025-05-17T09:59:57.951824249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 09:59:57.952350 containerd[1530]: time="2025-05-17T09:59:57.952323559Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 09:59:58.416555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890560393.mount: Deactivated successfully. 
May 17 09:59:58.421245 containerd[1530]: time="2025-05-17T09:59:58.421189826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 09:59:58.421890 containerd[1530]: time="2025-05-17T09:59:58.421856789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 17 09:59:58.422480 containerd[1530]: time="2025-05-17T09:59:58.422445238Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 09:59:58.424780 containerd[1530]: time="2025-05-17T09:59:58.424744302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 09:59:58.425485 containerd[1530]: time="2025-05-17T09:59:58.425453625Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 473.095495ms" May 17 09:59:58.425485 containerd[1530]: time="2025-05-17T09:59:58.425481516Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 09:59:58.426313 containerd[1530]: time="2025-05-17T09:59:58.426285193Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 09:59:58.889653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4079571002.mount: Deactivated successfully. 
May 17 10:00:00.105075 containerd[1530]: time="2025-05-17T10:00:00.105025419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:00:00.105646 containerd[1530]: time="2025-05-17T10:00:00.105610331Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 17 10:00:00.106269 containerd[1530]: time="2025-05-17T10:00:00.106233465Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:00:00.109625 containerd[1530]: time="2025-05-17T10:00:00.109585585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:00:00.110905 containerd[1530]: time="2025-05-17T10:00:00.110844728Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.684529842s" May 17 10:00:00.110905 containerd[1530]: time="2025-05-17T10:00:00.110901618Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 17 10:00:03.598773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 10:00:03.599336 systemd[1]: kubelet.service: Consumed 158ms CPU time, 108.8M memory peak. May 17 10:00:03.601281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 10:00:03.624847 systemd[1]: Reload requested from client PID 2191 ('systemctl') (unit session-7.scope)... May 17 10:00:03.624989 systemd[1]: Reloading... May 17 10:00:03.690536 zram_generator::config[2234]: No configuration found. May 17 10:00:03.819326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 10:00:03.903148 systemd[1]: Reloading finished in 277 ms. May 17 10:00:03.965095 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 10:00:03.965181 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 10:00:03.965456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 10:00:03.965522 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95M memory peak. May 17 10:00:03.967141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 10:00:04.072066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 10:00:04.076070 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 10:00:04.111250 kubelet[2279]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 10:00:04.111250 kubelet[2279]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 17 10:00:04.111250 kubelet[2279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 10:00:04.111640 kubelet[2279]: I0517 10:00:04.111303 2279 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 10:00:05.335957 kubelet[2279]: I0517 10:00:05.335907 2279 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 10:00:05.335957 kubelet[2279]: I0517 10:00:05.335943 2279 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 10:00:05.336292 kubelet[2279]: I0517 10:00:05.336209 2279 server.go:954] "Client rotation is on, will bootstrap in background" May 17 10:00:05.366629 kubelet[2279]: E0517 10:00:05.366589 2279 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" May 17 10:00:05.368085 kubelet[2279]: I0517 10:00:05.368027 2279 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 10:00:05.374302 kubelet[2279]: I0517 10:00:05.374277 2279 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 17 10:00:05.377690 kubelet[2279]: I0517 10:00:05.377663 2279 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 10:00:05.378319 kubelet[2279]: I0517 10:00:05.378278 2279 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 10:00:05.378476 kubelet[2279]: I0517 10:00:05.378321 2279 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 10:00:05.378575 kubelet[2279]: I0517 10:00:05.378564 2279 topology_manager.go:138] "Creating topology manager with none policy" May 17 10:00:05.378575 kubelet[2279]: I0517 10:00:05.378573 2279 container_manager_linux.go:304] "Creating device plugin manager" May 17 10:00:05.378773 kubelet[2279]: I0517 10:00:05.378758 2279 state_mem.go:36] "Initialized new in-memory state store" May 17 10:00:05.381107 kubelet[2279]: I0517 10:00:05.381075 2279 kubelet.go:446] "Attempting to sync node with API server" May 17 10:00:05.381107 kubelet[2279]: I0517 10:00:05.381101 2279 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 10:00:05.381186 kubelet[2279]: I0517 10:00:05.381123 2279 kubelet.go:352] "Adding apiserver pod source" May 17 10:00:05.381186 kubelet[2279]: I0517 10:00:05.381133 2279 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 10:00:05.386658 kubelet[2279]: W0517 10:00:05.385778 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused May 17 10:00:05.386658 kubelet[2279]: E0517 10:00:05.385855 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" May 17 10:00:05.387171 kubelet[2279]: W0517 10:00:05.387123 2279 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused May 17 10:00:05.387351 kubelet[2279]: E0517 10:00:05.387281 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" May 17 10:00:05.391140 kubelet[2279]: I0517 10:00:05.391099 2279 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 17 10:00:05.391757 kubelet[2279]: I0517 10:00:05.391732 2279 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 10:00:05.391867 kubelet[2279]: W0517 10:00:05.391854 2279 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 10:00:05.392749 kubelet[2279]: I0517 10:00:05.392723 2279 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 10:00:05.392794 kubelet[2279]: I0517 10:00:05.392762 2279 server.go:1287] "Started kubelet" May 17 10:00:05.392901 kubelet[2279]: I0517 10:00:05.392868 2279 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 10:00:05.393989 kubelet[2279]: I0517 10:00:05.393964 2279 server.go:479] "Adding debug handlers to kubelet server" May 17 10:00:05.396372 kubelet[2279]: E0517 10:00:05.396088 2279 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1840482b1af3ed7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 10:00:05.392739708 +0000 UTC m=+1.313360575,LastTimestamp:2025-05-17 10:00:05.392739708 +0000 UTC m=+1.313360575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 10:00:05.396512 kubelet[2279]: I0517 10:00:05.396431 2279 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 10:00:05.396781 kubelet[2279]: I0517 10:00:05.396751 2279 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 10:00:05.397613 kubelet[2279]: I0517 10:00:05.397596 2279 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 10:00:05.397726 kubelet[2279]: I0517 10:00:05.397597 2279 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 10:00:05.397771 kubelet[2279]: I0517 10:00:05.397751 2279 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 10:00:05.397878 kubelet[2279]: I0517 10:00:05.397862 2279 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 10:00:05.397930 kubelet[2279]: I0517 10:00:05.397916 2279 
reconciler.go:26] "Reconciler: start to sync state" May 17 10:00:05.398098 kubelet[2279]: E0517 10:00:05.398068 2279 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 10:00:05.398275 kubelet[2279]: W0517 10:00:05.398230 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused May 17 10:00:05.398323 kubelet[2279]: E0517 10:00:05.398280 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" May 17 10:00:05.398527 kubelet[2279]: E0517 10:00:05.398406 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="200ms" May 17 10:00:05.399302 kubelet[2279]: E0517 10:00:05.399243 2279 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 10:00:05.400106 kubelet[2279]: I0517 10:00:05.400083 2279 factory.go:221] Registration of the containerd container factory successfully May 17 10:00:05.401511 kubelet[2279]: I0517 10:00:05.400205 2279 factory.go:221] Registration of the systemd container factory successfully May 17 10:00:05.401511 kubelet[2279]: I0517 10:00:05.400294 2279 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 10:00:05.408780 kubelet[2279]: I0517 10:00:05.408749 2279 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 10:00:05.408780 kubelet[2279]: I0517 10:00:05.408770 2279 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 10:00:05.408889 kubelet[2279]: I0517 10:00:05.408788 2279 state_mem.go:36] "Initialized new in-memory state store" May 17 10:00:05.412093 kubelet[2279]: I0517 10:00:05.412044 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 10:00:05.413310 kubelet[2279]: I0517 10:00:05.413111 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 10:00:05.413310 kubelet[2279]: I0517 10:00:05.413140 2279 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 10:00:05.413310 kubelet[2279]: I0517 10:00:05.413162 2279 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
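Every "connection refused" against 10.0.0.72:6443 above is the kubelet trying to reach an API server that is not running yet; these errors are expected to repeat until the static control-plane pods start. A quick probe sketch using the addresses taken from the log (curl being available on the host is an assumption):

    # The API server endpoint the kubelet is dialing; refused until kube-apiserver is up.
    curl -sk https://10.0.0.72:6443/healthz || echo "apiserver not reachable yet"

    # Confirm the kubelet itself is listening on the port it announced (0.0.0.0:10250).
    ss -ltn | grep ':10250'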
May 17 10:00:05.413310 kubelet[2279]: I0517 10:00:05.413169 2279 kubelet.go:2382] "Starting kubelet main sync loop" May 17 10:00:05.413310 kubelet[2279]: E0517 10:00:05.413208 2279 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 10:00:05.471382 kubelet[2279]: I0517 10:00:05.471345 2279 policy_none.go:49] "None policy: Start" May 17 10:00:05.471602 kubelet[2279]: I0517 10:00:05.471589 2279 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 10:00:05.471671 kubelet[2279]: I0517 10:00:05.471663 2279 state_mem.go:35] "Initializing new in-memory state store" May 17 10:00:05.471738 kubelet[2279]: W0517 10:00:05.471684 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused May 17 10:00:05.471777 kubelet[2279]: E0517 10:00:05.471750 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" May 17 10:00:05.477830 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 10:00:05.492788 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 10:00:05.496011 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 10:00:05.498598 kubelet[2279]: E0517 10:00:05.498556 2279 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 10:00:05.509511 kubelet[2279]: I0517 10:00:05.509463 2279 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 10:00:05.509739 kubelet[2279]: I0517 10:00:05.509718 2279 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 10:00:05.509799 kubelet[2279]: I0517 10:00:05.509737 2279 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 10:00:05.510149 kubelet[2279]: I0517 10:00:05.509963 2279 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 10:00:05.511109 kubelet[2279]: E0517 10:00:05.511064 2279 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 10:00:05.511180 kubelet[2279]: E0517 10:00:05.511140 2279 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 17 10:00:05.520865 systemd[1]: Created slice kubepods-burstable-podcf601c33179a75485fd3d15fcd4319ba.slice - libcontainer container kubepods-burstable-podcf601c33179a75485fd3d15fcd4319ba.slice. 
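systemd is now creating the kubepods cgroup hierarchy: kubepods.slice with burstable and besteffort children, plus one slice per static pod UID. A sketch of how that hierarchy can be inspected with standard systemd tools; the slice names are copied from the lines above:

    # Show the slice units the kubelet asked systemd to create.
    systemctl status kubepods.slice kubepods-burstable.slice --no-pager

    # Walk the corresponding cgroup subtree (cgroup v2 unified hierarchy assumed).
    systemd-cgls /kubepods.slice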
May 17 10:00:05.546321 kubelet[2279]: E0517 10:00:05.546267 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:00:05.550120 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 17 10:00:05.551784 kubelet[2279]: E0517 10:00:05.551736 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:00:05.553634 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 17 10:00:05.555505 kubelet[2279]: E0517 10:00:05.555334 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:00:05.599094 kubelet[2279]: I0517 10:00:05.598984 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:00:05.599094 kubelet[2279]: I0517 10:00:05.599022 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:00:05.599094 kubelet[2279]: I0517 10:00:05.599042 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:00:05.599094 kubelet[2279]: I0517 10:00:05.599057 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf601c33179a75485fd3d15fcd4319ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf601c33179a75485fd3d15fcd4319ba\") " pod="kube-system/kube-apiserver-localhost" May 17 10:00:05.599094 kubelet[2279]: I0517 10:00:05.599072 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf601c33179a75485fd3d15fcd4319ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cf601c33179a75485fd3d15fcd4319ba\") " pod="kube-system/kube-apiserver-localhost" May 17 10:00:05.599288 kubelet[2279]: I0517 10:00:05.599097 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:00:05.599288 kubelet[2279]: E0517 10:00:05.599103 2279 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="400ms" May 17 10:00:05.599288 kubelet[2279]: I0517 10:00:05.599114 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 17 10:00:05.599288 kubelet[2279]: I0517 10:00:05.599156 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf601c33179a75485fd3d15fcd4319ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf601c33179a75485fd3d15fcd4319ba\") " pod="kube-system/kube-apiserver-localhost" May 17 10:00:05.599288 kubelet[2279]: I0517 10:00:05.599177 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:00:05.611060 kubelet[2279]: I0517 10:00:05.611037 2279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 10:00:05.611600 kubelet[2279]: E0517 10:00:05.611577 2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" May 17 10:00:05.813505 kubelet[2279]: I0517 10:00:05.813465 2279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 10:00:05.813906 kubelet[2279]: E0517 10:00:05.813860 2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" May 17 10:00:05.847731 containerd[1530]: time="2025-05-17T10:00:05.847683275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cf601c33179a75485fd3d15fcd4319ba,Namespace:kube-system,Attempt:0,}" May 17 10:00:05.852638 containerd[1530]: time="2025-05-17T10:00:05.852486523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 17 10:00:05.856190 containerd[1530]: time="2025-05-17T10:00:05.856085859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 17 10:00:05.866925 containerd[1530]: time="2025-05-17T10:00:05.866885996Z" level=info msg="connecting to shim 40e408a342810849b45631cb1d99b352089c0376652fbd52ac46424f3317bf1b" address="unix:///run/containerd/s/20cf17aa1b7097f4864d0825aec299922d26ae731eeaf3ab7569370807eeb3a0" namespace=k8s.io protocol=ttrpc version=3 May 17 10:00:05.884152 containerd[1530]: time="2025-05-17T10:00:05.884101550Z" level=info msg="connecting to shim 1606301389a5836013f2c34834c3044475037f1c988ae47c60c0586792714c10" address="unix:///run/containerd/s/5d4efbaedc9468d73c3b19615d142f10e153a46b04ca336466b1c77428b8e1d4" namespace=k8s.io protocol=ttrpc version=3 May 17 
10:00:05.887467 containerd[1530]: time="2025-05-17T10:00:05.887429356Z" level=info msg="connecting to shim ec9710d7f718d955f037f7fecb485571f9dc568e6b63cdbe5a7a0bfaad0ac48e" address="unix:///run/containerd/s/1b159b0377568c6f1e1e25a7e1ec93d5eafcc5ba1113ee2a9e415f79a277be26" namespace=k8s.io protocol=ttrpc version=3 May 17 10:00:05.891726 systemd[1]: Started cri-containerd-40e408a342810849b45631cb1d99b352089c0376652fbd52ac46424f3317bf1b.scope - libcontainer container 40e408a342810849b45631cb1d99b352089c0376652fbd52ac46424f3317bf1b. May 17 10:00:05.905629 systemd[1]: Started cri-containerd-1606301389a5836013f2c34834c3044475037f1c988ae47c60c0586792714c10.scope - libcontainer container 1606301389a5836013f2c34834c3044475037f1c988ae47c60c0586792714c10. May 17 10:00:05.909550 systemd[1]: Started cri-containerd-ec9710d7f718d955f037f7fecb485571f9dc568e6b63cdbe5a7a0bfaad0ac48e.scope - libcontainer container ec9710d7f718d955f037f7fecb485571f9dc568e6b63cdbe5a7a0bfaad0ac48e. May 17 10:00:05.941227 containerd[1530]: time="2025-05-17T10:00:05.941176603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cf601c33179a75485fd3d15fcd4319ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"40e408a342810849b45631cb1d99b352089c0376652fbd52ac46424f3317bf1b\"" May 17 10:00:05.948207 containerd[1530]: time="2025-05-17T10:00:05.948172913Z" level=info msg="CreateContainer within sandbox \"40e408a342810849b45631cb1d99b352089c0376652fbd52ac46424f3317bf1b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 10:00:05.948875 containerd[1530]: time="2025-05-17T10:00:05.948820846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1606301389a5836013f2c34834c3044475037f1c988ae47c60c0586792714c10\"" May 17 10:00:05.950524 containerd[1530]: time="2025-05-17T10:00:05.950482803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec9710d7f718d955f037f7fecb485571f9dc568e6b63cdbe5a7a0bfaad0ac48e\"" May 17 10:00:05.951309 containerd[1530]: time="2025-05-17T10:00:05.951215710Z" level=info msg="CreateContainer within sandbox \"1606301389a5836013f2c34834c3044475037f1c988ae47c60c0586792714c10\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 10:00:05.952763 containerd[1530]: time="2025-05-17T10:00:05.952739174Z" level=info msg="CreateContainer within sandbox \"ec9710d7f718d955f037f7fecb485571f9dc568e6b63cdbe5a7a0bfaad0ac48e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 10:00:05.957768 containerd[1530]: time="2025-05-17T10:00:05.957739771Z" level=info msg="Container 6fa8c7fa03aac6a06d06a053c715c004ea75023f845ed13e4d8b0964714dd2fe: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:05.959621 containerd[1530]: time="2025-05-17T10:00:05.959596590Z" level=info msg="Container ab2129e79a2cbacfc36eda01c3fe5df8f8dc7bd5b7f20f08700c47d491f0652d: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:05.962065 containerd[1530]: time="2025-05-17T10:00:05.961978735Z" level=info msg="Container d9bf56b6edfc3ad1746f23084ef31b471a247e2140338547b28242752bf4ed06: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:05.968217 containerd[1530]: time="2025-05-17T10:00:05.968136945Z" level=info msg="CreateContainer within sandbox 
\"40e408a342810849b45631cb1d99b352089c0376652fbd52ac46424f3317bf1b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6fa8c7fa03aac6a06d06a053c715c004ea75023f845ed13e4d8b0964714dd2fe\"" May 17 10:00:05.969374 containerd[1530]: time="2025-05-17T10:00:05.968902750Z" level=info msg="StartContainer for \"6fa8c7fa03aac6a06d06a053c715c004ea75023f845ed13e4d8b0964714dd2fe\"" May 17 10:00:05.969475 containerd[1530]: time="2025-05-17T10:00:05.969447334Z" level=info msg="CreateContainer within sandbox \"1606301389a5836013f2c34834c3044475037f1c988ae47c60c0586792714c10\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ab2129e79a2cbacfc36eda01c3fe5df8f8dc7bd5b7f20f08700c47d491f0652d\"" May 17 10:00:05.969809 containerd[1530]: time="2025-05-17T10:00:05.969786546Z" level=info msg="StartContainer for \"ab2129e79a2cbacfc36eda01c3fe5df8f8dc7bd5b7f20f08700c47d491f0652d\"" May 17 10:00:05.970831 containerd[1530]: time="2025-05-17T10:00:05.970804623Z" level=info msg="connecting to shim ab2129e79a2cbacfc36eda01c3fe5df8f8dc7bd5b7f20f08700c47d491f0652d" address="unix:///run/containerd/s/5d4efbaedc9468d73c3b19615d142f10e153a46b04ca336466b1c77428b8e1d4" protocol=ttrpc version=3 May 17 10:00:05.971231 containerd[1530]: time="2025-05-17T10:00:05.971184035Z" level=info msg="CreateContainer within sandbox \"ec9710d7f718d955f037f7fecb485571f9dc568e6b63cdbe5a7a0bfaad0ac48e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d9bf56b6edfc3ad1746f23084ef31b471a247e2140338547b28242752bf4ed06\"" May 17 10:00:05.971401 containerd[1530]: time="2025-05-17T10:00:05.971260864Z" level=info msg="connecting to shim 6fa8c7fa03aac6a06d06a053c715c004ea75023f845ed13e4d8b0964714dd2fe" address="unix:///run/containerd/s/20cf17aa1b7097f4864d0825aec299922d26ae731eeaf3ab7569370807eeb3a0" protocol=ttrpc version=3 May 17 10:00:05.971766 containerd[1530]: time="2025-05-17T10:00:05.971722040Z" level=info msg="StartContainer for \"d9bf56b6edfc3ad1746f23084ef31b471a247e2140338547b28242752bf4ed06\"" May 17 10:00:05.973471 containerd[1530]: time="2025-05-17T10:00:05.973437838Z" level=info msg="connecting to shim d9bf56b6edfc3ad1746f23084ef31b471a247e2140338547b28242752bf4ed06" address="unix:///run/containerd/s/1b159b0377568c6f1e1e25a7e1ec93d5eafcc5ba1113ee2a9e415f79a277be26" protocol=ttrpc version=3 May 17 10:00:05.998662 systemd[1]: Started cri-containerd-ab2129e79a2cbacfc36eda01c3fe5df8f8dc7bd5b7f20f08700c47d491f0652d.scope - libcontainer container ab2129e79a2cbacfc36eda01c3fe5df8f8dc7bd5b7f20f08700c47d491f0652d. May 17 10:00:05.999822 kubelet[2279]: E0517 10:00:05.999776 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="800ms" May 17 10:00:06.003205 systemd[1]: Started cri-containerd-6fa8c7fa03aac6a06d06a053c715c004ea75023f845ed13e4d8b0964714dd2fe.scope - libcontainer container 6fa8c7fa03aac6a06d06a053c715c004ea75023f845ed13e4d8b0964714dd2fe. May 17 10:00:06.004389 systemd[1]: Started cri-containerd-d9bf56b6edfc3ad1746f23084ef31b471a247e2140338547b28242752bf4ed06.scope - libcontainer container d9bf56b6edfc3ad1746f23084ef31b471a247e2140338547b28242752bf4ed06. 
May 17 10:00:06.037782 containerd[1530]: time="2025-05-17T10:00:06.037743337Z" level=info msg="StartContainer for \"ab2129e79a2cbacfc36eda01c3fe5df8f8dc7bd5b7f20f08700c47d491f0652d\" returns successfully" May 17 10:00:06.047582 containerd[1530]: time="2025-05-17T10:00:06.047536066Z" level=info msg="StartContainer for \"6fa8c7fa03aac6a06d06a053c715c004ea75023f845ed13e4d8b0964714dd2fe\" returns successfully" May 17 10:00:06.057680 containerd[1530]: time="2025-05-17T10:00:06.057607965Z" level=info msg="StartContainer for \"d9bf56b6edfc3ad1746f23084ef31b471a247e2140338547b28242752bf4ed06\" returns successfully" May 17 10:00:06.216016 kubelet[2279]: I0517 10:00:06.215786 2279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 10:00:06.216256 kubelet[2279]: E0517 10:00:06.216225 2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" May 17 10:00:06.419455 kubelet[2279]: E0517 10:00:06.419388 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:00:06.422336 kubelet[2279]: E0517 10:00:06.422300 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:00:06.425500 kubelet[2279]: E0517 10:00:06.424318 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:00:07.019408 kubelet[2279]: I0517 10:00:07.019032 2279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 10:00:07.389896 kubelet[2279]: I0517 10:00:07.389802 2279 apiserver.go:52] "Watching apiserver" May 17 10:00:07.426388 kubelet[2279]: E0517 10:00:07.426329 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:00:07.428136 kubelet[2279]: E0517 10:00:07.428112 2279 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:00:07.433645 kubelet[2279]: E0517 10:00:07.433478 2279 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 17 10:00:07.498686 kubelet[2279]: I0517 10:00:07.498648 2279 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 10:00:07.583752 kubelet[2279]: I0517 10:00:07.583644 2279 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 17 10:00:07.598751 kubelet[2279]: I0517 10:00:07.598711 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 17 10:00:07.607750 kubelet[2279]: E0517 10:00:07.607712 2279 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 17 10:00:07.607750 kubelet[2279]: I0517 10:00:07.607744 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 10:00:07.609592 kubelet[2279]: E0517 10:00:07.609542 2279 kubelet.go:3196] "Failed creating a 
mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 17 10:00:07.609592 kubelet[2279]: I0517 10:00:07.609590 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 10:00:07.612201 kubelet[2279]: E0517 10:00:07.612163 2279 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 17 10:00:08.427388 kubelet[2279]: I0517 10:00:08.427125 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 10:00:08.980632 kubelet[2279]: I0517 10:00:08.980600 2279 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 10:00:09.529816 systemd[1]: Reload requested from client PID 2558 ('systemctl') (unit session-7.scope)... May 17 10:00:09.529833 systemd[1]: Reloading... May 17 10:00:09.606521 zram_generator::config[2601]: No configuration found. May 17 10:00:09.670060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 10:00:09.767198 systemd[1]: Reloading finished in 237 ms. May 17 10:00:09.802008 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 10:00:09.817424 systemd[1]: kubelet.service: Deactivated successfully. May 17 10:00:09.817707 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 10:00:09.817760 systemd[1]: kubelet.service: Consumed 1.710s CPU time, 130M memory peak. May 17 10:00:09.819419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 10:00:09.981444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 10:00:09.993822 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 10:00:10.045455 kubelet[2643]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 10:00:10.045455 kubelet[2643]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 10:00:10.045455 kubelet[2643]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 10:00:10.045969 kubelet[2643]: I0517 10:00:10.045526 2643 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 10:00:10.052300 kubelet[2643]: I0517 10:00:10.052192 2643 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 10:00:10.052300 kubelet[2643]: I0517 10:00:10.052224 2643 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 10:00:10.052811 kubelet[2643]: I0517 10:00:10.052760 2643 server.go:954] "Client rotation is on, will bootstrap in background" May 17 10:00:10.054055 kubelet[2643]: I0517 10:00:10.054026 2643 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 10:00:10.057473 kubelet[2643]: I0517 10:00:10.057432 2643 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 10:00:10.061361 kubelet[2643]: I0517 10:00:10.061341 2643 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 17 10:00:10.064304 kubelet[2643]: I0517 10:00:10.064283 2643 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 10:00:10.064549 kubelet[2643]: I0517 10:00:10.064520 2643 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 10:00:10.064716 kubelet[2643]: I0517 10:00:10.064550 2643 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 10:00:10.064808 kubelet[2643]: I0517 10:00:10.064727 2643 topology_manager.go:138] "Creating topology manager with none policy" May 17 10:00:10.064808 kubelet[2643]: I0517 10:00:10.064736 2643 container_manager_linux.go:304] "Creating device plugin manager" May 17 10:00:10.064808 kubelet[2643]: I0517 10:00:10.064781 2643 state_mem.go:36] "Initialized new in-memory state store" May 17 10:00:10.064969 kubelet[2643]: I0517 
10:00:10.064955 2643 kubelet.go:446] "Attempting to sync node with API server" May 17 10:00:10.065014 kubelet[2643]: I0517 10:00:10.065002 2643 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 10:00:10.065042 kubelet[2643]: I0517 10:00:10.065030 2643 kubelet.go:352] "Adding apiserver pod source" May 17 10:00:10.065065 kubelet[2643]: I0517 10:00:10.065044 2643 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 10:00:10.065877 kubelet[2643]: I0517 10:00:10.065850 2643 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 17 10:00:10.066513 kubelet[2643]: I0517 10:00:10.066393 2643 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 10:00:10.067783 kubelet[2643]: I0517 10:00:10.066801 2643 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 10:00:10.067783 kubelet[2643]: I0517 10:00:10.066840 2643 server.go:1287] "Started kubelet" May 17 10:00:10.067783 kubelet[2643]: I0517 10:00:10.067453 2643 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 10:00:10.069154 kubelet[2643]: I0517 10:00:10.069100 2643 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 10:00:10.069656 kubelet[2643]: I0517 10:00:10.069638 2643 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 10:00:10.070425 kubelet[2643]: I0517 10:00:10.070317 2643 server.go:479] "Adding debug handlers to kubelet server" May 17 10:00:10.072459 kubelet[2643]: I0517 10:00:10.072408 2643 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 10:00:10.074711 kubelet[2643]: I0517 10:00:10.074688 2643 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 10:00:10.076572 kubelet[2643]: E0517 10:00:10.076137 2643 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 10:00:10.076776 kubelet[2643]: I0517 10:00:10.076162 2643 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 10:00:10.076776 kubelet[2643]: I0517 10:00:10.076172 2643 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 10:00:10.076966 kubelet[2643]: I0517 10:00:10.076952 2643 reconciler.go:26] "Reconciler: start to sync state" May 17 10:00:10.078314 kubelet[2643]: E0517 10:00:10.078294 2643 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 10:00:10.078898 kubelet[2643]: I0517 10:00:10.078868 2643 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 10:00:10.092691 kubelet[2643]: I0517 10:00:10.092484 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 10:00:10.093673 kubelet[2643]: I0517 10:00:10.093421 2643 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 10:00:10.093673 kubelet[2643]: I0517 10:00:10.093448 2643 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 10:00:10.093673 kubelet[2643]: I0517 10:00:10.093466 2643 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 10:00:10.093673 kubelet[2643]: I0517 10:00:10.093472 2643 kubelet.go:2382] "Starting kubelet main sync loop" May 17 10:00:10.093673 kubelet[2643]: E0517 10:00:10.093527 2643 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 10:00:10.094202 kubelet[2643]: I0517 10:00:10.094040 2643 factory.go:221] Registration of the containerd container factory successfully May 17 10:00:10.094202 kubelet[2643]: I0517 10:00:10.094064 2643 factory.go:221] Registration of the systemd container factory successfully May 17 10:00:10.126822 kubelet[2643]: I0517 10:00:10.126781 2643 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 10:00:10.126822 kubelet[2643]: I0517 10:00:10.126811 2643 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 10:00:10.126977 kubelet[2643]: I0517 10:00:10.126845 2643 state_mem.go:36] "Initialized new in-memory state store" May 17 10:00:10.129238 kubelet[2643]: I0517 10:00:10.128914 2643 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 10:00:10.129238 kubelet[2643]: I0517 10:00:10.128943 2643 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 10:00:10.129238 kubelet[2643]: I0517 10:00:10.128962 2643 policy_none.go:49] "None policy: Start" May 17 10:00:10.129238 kubelet[2643]: I0517 10:00:10.128971 2643 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 10:00:10.129238 kubelet[2643]: I0517 10:00:10.128982 2643 state_mem.go:35] "Initializing new in-memory state store" May 17 10:00:10.129238 kubelet[2643]: I0517 10:00:10.129090 2643 state_mem.go:75] "Updated machine memory state" May 17 10:00:10.132788 kubelet[2643]: I0517 10:00:10.132757 2643 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 10:00:10.133334 kubelet[2643]: I0517 10:00:10.132900 2643 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 10:00:10.133334 kubelet[2643]: I0517 10:00:10.132916 2643 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 10:00:10.134242 kubelet[2643]: I0517 10:00:10.134227 2643 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 10:00:10.134850 kubelet[2643]: E0517 10:00:10.134832 2643 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 10:00:10.194875 kubelet[2643]: I0517 10:00:10.194669 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 10:00:10.194875 kubelet[2643]: I0517 10:00:10.194698 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 10:00:10.194875 kubelet[2643]: I0517 10:00:10.194815 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 17 10:00:10.200364 kubelet[2643]: E0517 10:00:10.200331 2643 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 10:00:10.200665 kubelet[2643]: E0517 10:00:10.200579 2643 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 17 10:00:10.235119 kubelet[2643]: I0517 10:00:10.235088 2643 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 10:00:10.241509 kubelet[2643]: I0517 10:00:10.241461 2643 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 17 10:00:10.241631 kubelet[2643]: I0517 10:00:10.241583 2643 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 17 10:00:10.279050 kubelet[2643]: I0517 10:00:10.278986 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf601c33179a75485fd3d15fcd4319ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf601c33179a75485fd3d15fcd4319ba\") " pod="kube-system/kube-apiserver-localhost" May 17 10:00:10.279050 kubelet[2643]: I0517 10:00:10.279041 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:00:10.279210 kubelet[2643]: I0517 10:00:10.279065 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:00:10.279210 kubelet[2643]: I0517 10:00:10.279088 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:00:10.279210 kubelet[2643]: I0517 10:00:10.279104 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:00:10.279210 kubelet[2643]: I0517 10:00:10.279137 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf601c33179a75485fd3d15fcd4319ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cf601c33179a75485fd3d15fcd4319ba\") " pod="kube-system/kube-apiserver-localhost" May 17 10:00:10.279210 kubelet[2643]: I0517 10:00:10.279165 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf601c33179a75485fd3d15fcd4319ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cf601c33179a75485fd3d15fcd4319ba\") " pod="kube-system/kube-apiserver-localhost" May 17 10:00:10.279325 kubelet[2643]: I0517 10:00:10.279182 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:00:10.279325 kubelet[2643]: I0517 10:00:10.279196 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 17 10:00:10.536048 sudo[2678]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 10:00:10.536339 sudo[2678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 17 10:00:10.968698 sudo[2678]: pam_unix(sudo:session): session closed for user root May 17 10:00:11.065863 kubelet[2643]: I0517 10:00:11.065816 2643 apiserver.go:52] "Watching apiserver" May 17 10:00:11.077769 kubelet[2643]: I0517 10:00:11.077715 2643 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 10:00:11.112588 kubelet[2643]: I0517 10:00:11.112363 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 17 10:00:11.112588 kubelet[2643]: I0517 10:00:11.112484 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 10:00:11.112588 kubelet[2643]: I0517 10:00:11.112459 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 10:00:11.170022 kubelet[2643]: E0517 10:00:11.169750 2643 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 17 10:00:11.170022 kubelet[2643]: E0517 10:00:11.169924 2643 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 17 10:00:11.170252 kubelet[2643]: E0517 10:00:11.170231 2643 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 10:00:11.191417 kubelet[2643]: I0517 10:00:11.191323 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.191303019 podStartE2EDuration="3.191303019s" podCreationTimestamp="2025-05-17 10:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:00:11.170289641 +0000 UTC m=+1.173423701" watchObservedRunningTime="2025-05-17 10:00:11.191303019 +0000 UTC m=+1.194437079" May 17 10:00:11.213359 kubelet[2643]: I0517 10:00:11.213038 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.212995387 podStartE2EDuration="3.212995387s" podCreationTimestamp="2025-05-17 10:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:00:11.192061676 +0000 UTC m=+1.195195736" watchObservedRunningTime="2025-05-17 10:00:11.212995387 +0000 UTC m=+1.216129407" May 17 10:00:11.213359 kubelet[2643]: I0517 10:00:11.213230 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.213224735 podStartE2EDuration="1.213224735s" podCreationTimestamp="2025-05-17 10:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:00:11.212901822 +0000 UTC m=+1.216035842" watchObservedRunningTime="2025-05-17 10:00:11.213224735 +0000 UTC m=+1.216358795" May 17 10:00:12.856763 sudo[1735]: pam_unix(sudo:session): session closed for user root May 17 10:00:12.858022 sshd[1734]: Connection closed by 10.0.0.1 port 58884 May 17 10:00:12.859619 sshd-session[1732]: pam_unix(sshd:session): session closed for user core May 17 10:00:12.863721 systemd[1]: sshd@6-10.0.0.72:22-10.0.0.1:58884.service: Deactivated successfully. May 17 10:00:12.867405 systemd[1]: session-7.scope: Deactivated successfully. May 17 10:00:12.867849 systemd[1]: session-7.scope: Consumed 5.966s CPU time, 263.7M memory peak. May 17 10:00:12.870552 systemd-logind[1516]: Session 7 logged out. Waiting for processes to exit. May 17 10:00:12.871486 systemd-logind[1516]: Removed session 7. May 17 10:00:14.957556 kubelet[2643]: I0517 10:00:14.957524 2643 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 10:00:14.958020 containerd[1530]: time="2025-05-17T10:00:14.957829919Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 10:00:14.958676 kubelet[2643]: I0517 10:00:14.958357 2643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 10:00:15.806420 systemd[1]: Created slice kubepods-besteffort-pod24a46269_0666_4074_8682_42bbaa705d20.slice - libcontainer container kubepods-besteffort-pod24a46269_0666_4074_8682_42bbaa705d20.slice. 
May 17 10:00:15.817618 kubelet[2643]: I0517 10:00:15.817300 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnbrk\" (UniqueName: \"kubernetes.io/projected/24a46269-0666-4074-8682-42bbaa705d20-kube-api-access-jnbrk\") pod \"kube-proxy-8l457\" (UID: \"24a46269-0666-4074-8682-42bbaa705d20\") " pod="kube-system/kube-proxy-8l457" May 17 10:00:15.817618 kubelet[2643]: I0517 10:00:15.817342 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-hubble-tls\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817618 kubelet[2643]: I0517 10:00:15.817372 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24a46269-0666-4074-8682-42bbaa705d20-xtables-lock\") pod \"kube-proxy-8l457\" (UID: \"24a46269-0666-4074-8682-42bbaa705d20\") " pod="kube-system/kube-proxy-8l457" May 17 10:00:15.817618 kubelet[2643]: I0517 10:00:15.817400 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-bpf-maps\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817618 kubelet[2643]: I0517 10:00:15.817417 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-host-proc-sys-net\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817618 kubelet[2643]: I0517 10:00:15.817480 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-lib-modules\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817847 kubelet[2643]: I0517 10:00:15.817608 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/24a46269-0666-4074-8682-42bbaa705d20-kube-proxy\") pod \"kube-proxy-8l457\" (UID: \"24a46269-0666-4074-8682-42bbaa705d20\") " pod="kube-system/kube-proxy-8l457" May 17 10:00:15.817847 kubelet[2643]: I0517 10:00:15.817629 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-cgroup\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817847 kubelet[2643]: I0517 10:00:15.817686 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cni-path\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817847 kubelet[2643]: I0517 10:00:15.817712 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-config-path\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817847 kubelet[2643]: I0517 10:00:15.817729 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-xtables-lock\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817847 kubelet[2643]: I0517 10:00:15.817744 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzp22\" (UniqueName: \"kubernetes.io/projected/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-kube-api-access-mzp22\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817958 kubelet[2643]: I0517 10:00:15.817762 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-hostproc\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817958 kubelet[2643]: I0517 10:00:15.817778 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-host-proc-sys-kernel\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817958 kubelet[2643]: I0517 10:00:15.817793 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-clustermesh-secrets\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817958 kubelet[2643]: I0517 10:00:15.817807 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-etc-cni-netd\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.817958 kubelet[2643]: I0517 10:00:15.817828 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24a46269-0666-4074-8682-42bbaa705d20-lib-modules\") pod \"kube-proxy-8l457\" (UID: \"24a46269-0666-4074-8682-42bbaa705d20\") " pod="kube-system/kube-proxy-8l457" May 17 10:00:15.817958 kubelet[2643]: I0517 10:00:15.817844 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-run\") pod \"cilium-2gkg8\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " pod="kube-system/cilium-2gkg8" May 17 10:00:15.826405 systemd[1]: Created slice kubepods-burstable-pod1920fe25_22b4_4757_b4c3_9dad28aa1e5b.slice - libcontainer container kubepods-burstable-pod1920fe25_22b4_4757_b4c3_9dad28aa1e5b.slice. 
May 17 10:00:15.988686 systemd[1]: Created slice kubepods-besteffort-podf25b87e9_cd56_46cd_a07d_45fb46b3797d.slice - libcontainer container kubepods-besteffort-podf25b87e9_cd56_46cd_a07d_45fb46b3797d.slice. May 17 10:00:16.018831 kubelet[2643]: I0517 10:00:16.018784 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmp6\" (UniqueName: \"kubernetes.io/projected/f25b87e9-cd56-46cd-a07d-45fb46b3797d-kube-api-access-jvmp6\") pod \"cilium-operator-6c4d7847fc-6f5mq\" (UID: \"f25b87e9-cd56-46cd-a07d-45fb46b3797d\") " pod="kube-system/cilium-operator-6c4d7847fc-6f5mq" May 17 10:00:16.018831 kubelet[2643]: I0517 10:00:16.018831 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f25b87e9-cd56-46cd-a07d-45fb46b3797d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6f5mq\" (UID: \"f25b87e9-cd56-46cd-a07d-45fb46b3797d\") " pod="kube-system/cilium-operator-6c4d7847fc-6f5mq" May 17 10:00:16.120718 containerd[1530]: time="2025-05-17T10:00:16.120601227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8l457,Uid:24a46269-0666-4074-8682-42bbaa705d20,Namespace:kube-system,Attempt:0,}" May 17 10:00:16.129749 containerd[1530]: time="2025-05-17T10:00:16.129582101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2gkg8,Uid:1920fe25-22b4-4757-b4c3-9dad28aa1e5b,Namespace:kube-system,Attempt:0,}" May 17 10:00:16.137303 containerd[1530]: time="2025-05-17T10:00:16.137269101Z" level=info msg="connecting to shim 6fade1656cd66f803c9b7ded12c2a3e7880b6708006b5100df529d758613d232" address="unix:///run/containerd/s/ee71b9e20e7063a2592b2ab9eb558abb050c5e52873845c56ee41bf1b5a2c039" namespace=k8s.io protocol=ttrpc version=3 May 17 10:00:16.145533 containerd[1530]: time="2025-05-17T10:00:16.145481307Z" level=info msg="connecting to shim 409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697" address="unix:///run/containerd/s/cd8f4b6d6c96b511fe3c71121fd148e634c0f73acb4cdaba2fd57ec989dd6482" namespace=k8s.io protocol=ttrpc version=3 May 17 10:00:16.159641 systemd[1]: Started cri-containerd-6fade1656cd66f803c9b7ded12c2a3e7880b6708006b5100df529d758613d232.scope - libcontainer container 6fade1656cd66f803c9b7ded12c2a3e7880b6708006b5100df529d758613d232. May 17 10:00:16.162475 systemd[1]: Started cri-containerd-409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697.scope - libcontainer container 409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697. 
May 17 10:00:16.186460 containerd[1530]: time="2025-05-17T10:00:16.186408272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2gkg8,Uid:1920fe25-22b4-4757-b4c3-9dad28aa1e5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\"" May 17 10:00:16.188486 containerd[1530]: time="2025-05-17T10:00:16.188460363Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 10:00:16.188570 containerd[1530]: time="2025-05-17T10:00:16.188534883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8l457,Uid:24a46269-0666-4074-8682-42bbaa705d20,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fade1656cd66f803c9b7ded12c2a3e7880b6708006b5100df529d758613d232\"" May 17 10:00:16.193054 containerd[1530]: time="2025-05-17T10:00:16.192980232Z" level=info msg="CreateContainer within sandbox \"6fade1656cd66f803c9b7ded12c2a3e7880b6708006b5100df529d758613d232\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 10:00:16.200845 containerd[1530]: time="2025-05-17T10:00:16.200819516Z" level=info msg="Container 2f75e5f5e184ec094254d04fada05f78dc4ab9a58f64d25c0d260f52b7770d05: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:16.223371 containerd[1530]: time="2025-05-17T10:00:16.223336050Z" level=info msg="CreateContainer within sandbox \"6fade1656cd66f803c9b7ded12c2a3e7880b6708006b5100df529d758613d232\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f75e5f5e184ec094254d04fada05f78dc4ab9a58f64d25c0d260f52b7770d05\"" May 17 10:00:16.224029 containerd[1530]: time="2025-05-17T10:00:16.224006973Z" level=info msg="StartContainer for \"2f75e5f5e184ec094254d04fada05f78dc4ab9a58f64d25c0d260f52b7770d05\"" May 17 10:00:16.225565 containerd[1530]: time="2025-05-17T10:00:16.225511073Z" level=info msg="connecting to shim 2f75e5f5e184ec094254d04fada05f78dc4ab9a58f64d25c0d260f52b7770d05" address="unix:///run/containerd/s/ee71b9e20e7063a2592b2ab9eb558abb050c5e52873845c56ee41bf1b5a2c039" protocol=ttrpc version=3 May 17 10:00:16.247746 systemd[1]: Started cri-containerd-2f75e5f5e184ec094254d04fada05f78dc4ab9a58f64d25c0d260f52b7770d05.scope - libcontainer container 2f75e5f5e184ec094254d04fada05f78dc4ab9a58f64d25c0d260f52b7770d05. May 17 10:00:16.282432 containerd[1530]: time="2025-05-17T10:00:16.282378529Z" level=info msg="StartContainer for \"2f75e5f5e184ec094254d04fada05f78dc4ab9a58f64d25c0d260f52b7770d05\" returns successfully" May 17 10:00:16.293266 containerd[1530]: time="2025-05-17T10:00:16.293239948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6f5mq,Uid:f25b87e9-cd56-46cd-a07d-45fb46b3797d,Namespace:kube-system,Attempt:0,}" May 17 10:00:16.317675 containerd[1530]: time="2025-05-17T10:00:16.317637909Z" level=info msg="connecting to shim 9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6" address="unix:///run/containerd/s/6eaaa3c3e4a11cbde5f15f88779b0d4eb4d459cdae119848cf2702a07d3ceee3" namespace=k8s.io protocol=ttrpc version=3 May 17 10:00:16.343725 systemd[1]: Started cri-containerd-9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6.scope - libcontainer container 9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6. 
May 17 10:00:16.383667 containerd[1530]: time="2025-05-17T10:00:16.382243540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6f5mq,Uid:f25b87e9-cd56-46cd-a07d-45fb46b3797d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6\"" May 17 10:00:17.135322 kubelet[2643]: I0517 10:00:17.135262 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8l457" podStartSLOduration=2.135246602 podStartE2EDuration="2.135246602s" podCreationTimestamp="2025-05-17 10:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:00:17.135012444 +0000 UTC m=+7.138146504" watchObservedRunningTime="2025-05-17 10:00:17.135246602 +0000 UTC m=+7.138380662" May 17 10:00:22.104779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301640051.mount: Deactivated successfully. May 17 10:00:24.630847 containerd[1530]: time="2025-05-17T10:00:24.630793124Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:00:24.631402 containerd[1530]: time="2025-05-17T10:00:24.631349868Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 17 10:00:24.632088 containerd[1530]: time="2025-05-17T10:00:24.632042027Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:00:24.633952 containerd[1530]: time="2025-05-17T10:00:24.633913241Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.445420406s" May 17 10:00:24.633952 containerd[1530]: time="2025-05-17T10:00:24.633951427Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 17 10:00:24.640607 containerd[1530]: time="2025-05-17T10:00:24.640562037Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 10:00:24.658824 containerd[1530]: time="2025-05-17T10:00:24.658782112Z" level=info msg="CreateContainer within sandbox \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 10:00:24.703360 containerd[1530]: time="2025-05-17T10:00:24.703269387Z" level=info msg="Container 965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:24.718233 containerd[1530]: time="2025-05-17T10:00:24.718180455Z" level=info msg="CreateContainer within sandbox \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\"" May 17 10:00:24.718825 containerd[1530]: time="2025-05-17T10:00:24.718764058Z" level=info msg="StartContainer for \"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\"" May 17 10:00:24.719679 containerd[1530]: time="2025-05-17T10:00:24.719642065Z" level=info msg="connecting to shim 965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db" address="unix:///run/containerd/s/cd8f4b6d6c96b511fe3c71121fd148e634c0f73acb4cdaba2fd57ec989dd6482" protocol=ttrpc version=3 May 17 10:00:24.769774 systemd[1]: Started cri-containerd-965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db.scope - libcontainer container 965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db. May 17 10:00:24.804416 containerd[1530]: time="2025-05-17T10:00:24.803436712Z" level=info msg="StartContainer for \"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\" returns successfully" May 17 10:00:24.861977 systemd[1]: cri-containerd-965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db.scope: Deactivated successfully. May 17 10:00:24.888805 containerd[1530]: time="2025-05-17T10:00:24.888668073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\" id:\"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\" pid:3059 exited_at:{seconds:1747476024 nanos:878538671}" May 17 10:00:24.889900 containerd[1530]: time="2025-05-17T10:00:24.889862539Z" level=info msg="received exit event container_id:\"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\" id:\"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\" pid:3059 exited_at:{seconds:1747476024 nanos:878538671}" May 17 10:00:24.930514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db-rootfs.mount: Deactivated successfully. May 17 10:00:25.859062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052148367.mount: Deactivated successfully. May 17 10:00:26.195295 containerd[1530]: time="2025-05-17T10:00:26.195103120Z" level=info msg="CreateContainer within sandbox \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 10:00:26.223341 containerd[1530]: time="2025-05-17T10:00:26.222645696Z" level=info msg="Container b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:26.226657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701440390.mount: Deactivated successfully. 
May 17 10:00:26.230223 containerd[1530]: time="2025-05-17T10:00:26.230187988Z" level=info msg="CreateContainer within sandbox \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\"" May 17 10:00:26.230960 containerd[1530]: time="2025-05-17T10:00:26.230936494Z" level=info msg="StartContainer for \"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\"" May 17 10:00:26.232125 containerd[1530]: time="2025-05-17T10:00:26.232086449Z" level=info msg="connecting to shim b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619" address="unix:///run/containerd/s/cd8f4b6d6c96b511fe3c71121fd148e634c0f73acb4cdaba2fd57ec989dd6482" protocol=ttrpc version=3 May 17 10:00:26.256661 systemd[1]: Started cri-containerd-b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619.scope - libcontainer container b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619. May 17 10:00:26.302564 containerd[1530]: time="2025-05-17T10:00:26.302296569Z" level=info msg="StartContainer for \"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\" returns successfully" May 17 10:00:26.343427 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 10:00:26.343902 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 10:00:26.344429 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 17 10:00:26.346809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 10:00:26.349673 systemd[1]: cri-containerd-b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619.scope: Deactivated successfully. May 17 10:00:26.350031 systemd[1]: cri-containerd-b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619.scope: Consumed 41ms CPU time, 6.6M memory peak, 2.3M written to disk. May 17 10:00:26.351086 containerd[1530]: time="2025-05-17T10:00:26.351044537Z" level=info msg="received exit event container_id:\"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\" id:\"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\" pid:3121 exited_at:{seconds:1747476026 nanos:350840850}" May 17 10:00:26.351294 containerd[1530]: time="2025-05-17T10:00:26.351091966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\" id:\"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\" pid:3121 exited_at:{seconds:1747476026 nanos:350840850}" May 17 10:00:26.387163 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 17 10:00:26.616375 containerd[1530]: time="2025-05-17T10:00:26.616322654Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:00:26.619909 containerd[1530]: time="2025-05-17T10:00:26.619875665Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 17 10:00:26.620753 containerd[1530]: time="2025-05-17T10:00:26.620705021Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:00:26.621972 containerd[1530]: time="2025-05-17T10:00:26.621872787Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.981262077s" May 17 10:00:26.621972 containerd[1530]: time="2025-05-17T10:00:26.621907529Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 17 10:00:26.624708 containerd[1530]: time="2025-05-17T10:00:26.624664284Z" level=info msg="CreateContainer within sandbox \"9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 10:00:26.635135 containerd[1530]: time="2025-05-17T10:00:26.635057190Z" level=info msg="Container 97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:26.641029 containerd[1530]: time="2025-05-17T10:00:26.640982756Z" level=info msg="CreateContainer within sandbox \"9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\"" May 17 10:00:26.641501 containerd[1530]: time="2025-05-17T10:00:26.641421229Z" level=info msg="StartContainer for \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\"" May 17 10:00:26.642441 containerd[1530]: time="2025-05-17T10:00:26.642412326Z" level=info msg="connecting to shim 97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20" address="unix:///run/containerd/s/6eaaa3c3e4a11cbde5f15f88779b0d4eb4d459cdae119848cf2702a07d3ceee3" protocol=ttrpc version=3 May 17 10:00:26.664766 systemd[1]: Started cri-containerd-97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20.scope - libcontainer container 97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20. May 17 10:00:26.688993 containerd[1530]: time="2025-05-17T10:00:26.688957963Z" level=info msg="StartContainer for \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\" returns successfully" May 17 10:00:26.858708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619-rootfs.mount: Deactivated successfully. 
May 17 10:00:27.202571 containerd[1530]: time="2025-05-17T10:00:27.202478862Z" level=info msg="CreateContainer within sandbox \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 10:00:27.214470 containerd[1530]: time="2025-05-17T10:00:27.214423759Z" level=info msg="Container 62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:27.230517 kubelet[2643]: I0517 10:00:27.229472 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6f5mq" podStartSLOduration=1.992556758 podStartE2EDuration="12.229451917s" podCreationTimestamp="2025-05-17 10:00:15 +0000 UTC" firstStartedPulling="2025-05-17 10:00:16.385729976 +0000 UTC m=+6.388864036" lastFinishedPulling="2025-05-17 10:00:26.622625175 +0000 UTC m=+16.625759195" observedRunningTime="2025-05-17 10:00:27.229391321 +0000 UTC m=+17.232525421" watchObservedRunningTime="2025-05-17 10:00:27.229451917 +0000 UTC m=+17.232585977" May 17 10:00:27.231184 containerd[1530]: time="2025-05-17T10:00:27.231123985Z" level=info msg="CreateContainer within sandbox \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\"" May 17 10:00:27.231800 containerd[1530]: time="2025-05-17T10:00:27.231757879Z" level=info msg="StartContainer for \"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\"" May 17 10:00:27.234918 containerd[1530]: time="2025-05-17T10:00:27.234470442Z" level=info msg="connecting to shim 62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd" address="unix:///run/containerd/s/cd8f4b6d6c96b511fe3c71121fd148e634c0f73acb4cdaba2fd57ec989dd6482" protocol=ttrpc version=3 May 17 10:00:27.253678 systemd[1]: Started cri-containerd-62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd.scope - libcontainer container 62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd. May 17 10:00:27.322116 containerd[1530]: time="2025-05-17T10:00:27.322079116Z" level=info msg="StartContainer for \"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\" returns successfully" May 17 10:00:27.337220 systemd[1]: cri-containerd-62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd.scope: Deactivated successfully. May 17 10:00:27.345009 containerd[1530]: time="2025-05-17T10:00:27.344846526Z" level=info msg="received exit event container_id:\"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\" id:\"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\" pid:3206 exited_at:{seconds:1747476027 nanos:344645848}" May 17 10:00:27.345009 containerd[1530]: time="2025-05-17T10:00:27.344923332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\" id:\"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\" pid:3206 exited_at:{seconds:1747476027 nanos:344645848}" May 17 10:00:27.855906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd-rootfs.mount: Deactivated successfully. 
May 17 10:00:28.205819 containerd[1530]: time="2025-05-17T10:00:28.205720072Z" level=info msg="CreateContainer within sandbox \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 10:00:28.222847 containerd[1530]: time="2025-05-17T10:00:28.222796458Z" level=info msg="Container 0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:28.224719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount400730052.mount: Deactivated successfully. May 17 10:00:28.233116 containerd[1530]: time="2025-05-17T10:00:28.233073707Z" level=info msg="CreateContainer within sandbox \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\"" May 17 10:00:28.233736 containerd[1530]: time="2025-05-17T10:00:28.233716988Z" level=info msg="StartContainer for \"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\"" May 17 10:00:28.234893 containerd[1530]: time="2025-05-17T10:00:28.234860870Z" level=info msg="connecting to shim 0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca" address="unix:///run/containerd/s/cd8f4b6d6c96b511fe3c71121fd148e634c0f73acb4cdaba2fd57ec989dd6482" protocol=ttrpc version=3 May 17 10:00:28.257676 systemd[1]: Started cri-containerd-0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca.scope - libcontainer container 0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca. May 17 10:00:28.278942 systemd[1]: cri-containerd-0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca.scope: Deactivated successfully. May 17 10:00:28.280727 containerd[1530]: time="2025-05-17T10:00:28.280616114Z" level=info msg="received exit event container_id:\"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\" id:\"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\" pid:3247 exited_at:{seconds:1747476028 nanos:279786528}" May 17 10:00:28.280727 containerd[1530]: time="2025-05-17T10:00:28.280659538Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\" id:\"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\" pid:3247 exited_at:{seconds:1747476028 nanos:279786528}" May 17 10:00:28.295535 containerd[1530]: time="2025-05-17T10:00:28.294846221Z" level=info msg="StartContainer for \"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\" returns successfully" May 17 10:00:28.489770 update_engine[1517]: I20250517 10:00:28.488862 1517 update_attempter.cc:509] Updating boot flags... May 17 10:00:28.855945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca-rootfs.mount: Deactivated successfully. 
May 17 10:00:29.210530 containerd[1530]: time="2025-05-17T10:00:29.210425310Z" level=info msg="CreateContainer within sandbox \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 10:00:29.223145 containerd[1530]: time="2025-05-17T10:00:29.223098475Z" level=info msg="Container 851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:29.230881 containerd[1530]: time="2025-05-17T10:00:29.230830722Z" level=info msg="CreateContainer within sandbox \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\"" May 17 10:00:29.231567 containerd[1530]: time="2025-05-17T10:00:29.231470263Z" level=info msg="StartContainer for \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\"" May 17 10:00:29.232835 containerd[1530]: time="2025-05-17T10:00:29.232546597Z" level=info msg="connecting to shim 851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d" address="unix:///run/containerd/s/cd8f4b6d6c96b511fe3c71121fd148e634c0f73acb4cdaba2fd57ec989dd6482" protocol=ttrpc version=3 May 17 10:00:29.249654 systemd[1]: Started cri-containerd-851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d.scope - libcontainer container 851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d. May 17 10:00:29.309159 containerd[1530]: time="2025-05-17T10:00:29.309056234Z" level=info msg="StartContainer for \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" returns successfully" May 17 10:00:29.420561 containerd[1530]: time="2025-05-17T10:00:29.420141926Z" level=info msg="TaskExit event in podsandbox handler container_id:\"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" id:\"8116043a8124aa2581bebfb5e44aab065d8aa40a0af6a950b5094143b3ba35d7\" pid:3333 exited_at:{seconds:1747476029 nanos:419886349}" May 17 10:00:29.518972 kubelet[2643]: I0517 10:00:29.518879 2643 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 10:00:29.586115 systemd[1]: Created slice kubepods-burstable-pod9702070e_6573_4eff_ab20_3d2bad999611.slice - libcontainer container kubepods-burstable-pod9702070e_6573_4eff_ab20_3d2bad999611.slice. May 17 10:00:29.593517 systemd[1]: Created slice kubepods-burstable-podba0ed6c2_f523_43ac_b7eb_41400a29d7f2.slice - libcontainer container kubepods-burstable-podba0ed6c2_f523_43ac_b7eb_41400a29d7f2.slice. 
May 17 10:00:29.617687 kubelet[2643]: I0517 10:00:29.617649 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9702070e-6573-4eff-ab20-3d2bad999611-config-volume\") pod \"coredns-668d6bf9bc-m4sj6\" (UID: \"9702070e-6573-4eff-ab20-3d2bad999611\") " pod="kube-system/coredns-668d6bf9bc-m4sj6" May 17 10:00:29.617907 kubelet[2643]: I0517 10:00:29.617827 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wtq4\" (UniqueName: \"kubernetes.io/projected/ba0ed6c2-f523-43ac-b7eb-41400a29d7f2-kube-api-access-8wtq4\") pod \"coredns-668d6bf9bc-kkv7k\" (UID: \"ba0ed6c2-f523-43ac-b7eb-41400a29d7f2\") " pod="kube-system/coredns-668d6bf9bc-kkv7k" May 17 10:00:29.617907 kubelet[2643]: I0517 10:00:29.617854 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv8dd\" (UniqueName: \"kubernetes.io/projected/9702070e-6573-4eff-ab20-3d2bad999611-kube-api-access-kv8dd\") pod \"coredns-668d6bf9bc-m4sj6\" (UID: \"9702070e-6573-4eff-ab20-3d2bad999611\") " pod="kube-system/coredns-668d6bf9bc-m4sj6" May 17 10:00:29.617907 kubelet[2643]: I0517 10:00:29.617870 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba0ed6c2-f523-43ac-b7eb-41400a29d7f2-config-volume\") pod \"coredns-668d6bf9bc-kkv7k\" (UID: \"ba0ed6c2-f523-43ac-b7eb-41400a29d7f2\") " pod="kube-system/coredns-668d6bf9bc-kkv7k" May 17 10:00:29.893549 containerd[1530]: time="2025-05-17T10:00:29.893413412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m4sj6,Uid:9702070e-6573-4eff-ab20-3d2bad999611,Namespace:kube-system,Attempt:0,}" May 17 10:00:29.896181 containerd[1530]: time="2025-05-17T10:00:29.896072511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kkv7k,Uid:ba0ed6c2-f523-43ac-b7eb-41400a29d7f2,Namespace:kube-system,Attempt:0,}" May 17 10:00:30.237575 kubelet[2643]: I0517 10:00:30.237421 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2gkg8" podStartSLOduration=6.785152245 podStartE2EDuration="15.237396795s" podCreationTimestamp="2025-05-17 10:00:15 +0000 UTC" firstStartedPulling="2025-05-17 10:00:16.187896876 +0000 UTC m=+6.191030936" lastFinishedPulling="2025-05-17 10:00:24.640141426 +0000 UTC m=+14.643275486" observedRunningTime="2025-05-17 10:00:30.235816512 +0000 UTC m=+20.238950572" watchObservedRunningTime="2025-05-17 10:00:30.237396795 +0000 UTC m=+20.240530855" May 17 10:00:31.492615 systemd-networkd[1440]: cilium_host: Link UP May 17 10:00:31.492730 systemd-networkd[1440]: cilium_net: Link UP May 17 10:00:31.493078 systemd-networkd[1440]: cilium_net: Gained carrier May 17 10:00:31.494090 systemd-networkd[1440]: cilium_host: Gained carrier May 17 10:00:31.574459 systemd-networkd[1440]: cilium_vxlan: Link UP May 17 10:00:31.574466 systemd-networkd[1440]: cilium_vxlan: Gained carrier May 17 10:00:31.847653 systemd-networkd[1440]: cilium_net: Gained IPv6LL May 17 10:00:31.892536 kernel: NET: Registered PF_ALG protocol family May 17 10:00:31.967675 systemd-networkd[1440]: cilium_host: Gained IPv6LL May 17 10:00:32.457873 systemd-networkd[1440]: lxc_health: Link UP May 17 10:00:32.460152 systemd-networkd[1440]: lxc_health: Gained carrier May 17 10:00:33.028527 kernel: eth0: renamed from tmp61c85 May 17 
10:00:33.029825 systemd-networkd[1440]: lxcec73ad604a91: Link UP May 17 10:00:33.037023 systemd-networkd[1440]: lxc91e83c7437fd: Link UP May 17 10:00:33.038926 kernel: eth0: renamed from tmp2e281 May 17 10:00:33.040199 systemd-networkd[1440]: tmp2e281: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 10:00:33.040271 systemd-networkd[1440]: tmp2e281: Cannot enable IPv6, ignoring: No such file or directory May 17 10:00:33.040282 systemd-networkd[1440]: tmp2e281: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory May 17 10:00:33.040292 systemd-networkd[1440]: tmp2e281: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory May 17 10:00:33.040301 systemd-networkd[1440]: tmp2e281: Cannot set IPv6 proxy NDP, ignoring: No such file or directory May 17 10:00:33.040313 systemd-networkd[1440]: tmp2e281: Cannot enable promote_secondaries for interface, ignoring: No such file or directory May 17 10:00:33.042114 systemd-networkd[1440]: lxcec73ad604a91: Gained carrier May 17 10:00:33.043078 systemd-networkd[1440]: lxc91e83c7437fd: Gained carrier May 17 10:00:33.399663 systemd-networkd[1440]: cilium_vxlan: Gained IPv6LL May 17 10:00:34.487702 systemd-networkd[1440]: lxc_health: Gained IPv6LL May 17 10:00:34.487957 systemd-networkd[1440]: lxcec73ad604a91: Gained IPv6LL May 17 10:00:34.615750 systemd-networkd[1440]: lxc91e83c7437fd: Gained IPv6LL May 17 10:00:36.666565 containerd[1530]: time="2025-05-17T10:00:36.666478492Z" level=info msg="connecting to shim 61c85062bf8e95c7ce80a6f8273927b94486564a87bd94f4e11a3274bb143405" address="unix:///run/containerd/s/bb82e552bb6b33eef7ef88a2b150b53c5b329aac4a1b5454b43bcbf174e8a441" namespace=k8s.io protocol=ttrpc version=3 May 17 10:00:36.666925 containerd[1530]: time="2025-05-17T10:00:36.666867201Z" level=info msg="connecting to shim 2e2810e32e2a6aacd09c7ccc27f5aa95d457da150c184b4fb7d54de50826b963" address="unix:///run/containerd/s/733b76011c49d114390f263c76ffe83dfad09b3d3c8310d68b5c52cdcef913ac" namespace=k8s.io protocol=ttrpc version=3 May 17 10:00:36.691680 systemd[1]: Started cri-containerd-2e2810e32e2a6aacd09c7ccc27f5aa95d457da150c184b4fb7d54de50826b963.scope - libcontainer container 2e2810e32e2a6aacd09c7ccc27f5aa95d457da150c184b4fb7d54de50826b963. May 17 10:00:36.693042 systemd[1]: Started cri-containerd-61c85062bf8e95c7ce80a6f8273927b94486564a87bd94f4e11a3274bb143405.scope - libcontainer container 61c85062bf8e95c7ce80a6f8273927b94486564a87bd94f4e11a3274bb143405. 
May 17 10:00:36.705595 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 10:00:36.706574 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 10:00:36.732557 containerd[1530]: time="2025-05-17T10:00:36.732454302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kkv7k,Uid:ba0ed6c2-f523-43ac-b7eb-41400a29d7f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"61c85062bf8e95c7ce80a6f8273927b94486564a87bd94f4e11a3274bb143405\"" May 17 10:00:36.734304 containerd[1530]: time="2025-05-17T10:00:36.734276240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m4sj6,Uid:9702070e-6573-4eff-ab20-3d2bad999611,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e2810e32e2a6aacd09c7ccc27f5aa95d457da150c184b4fb7d54de50826b963\"" May 17 10:00:36.745950 containerd[1530]: time="2025-05-17T10:00:36.745909259Z" level=info msg="CreateContainer within sandbox \"61c85062bf8e95c7ce80a6f8273927b94486564a87bd94f4e11a3274bb143405\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 10:00:36.746732 containerd[1530]: time="2025-05-17T10:00:36.746711086Z" level=info msg="CreateContainer within sandbox \"2e2810e32e2a6aacd09c7ccc27f5aa95d457da150c184b4fb7d54de50826b963\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 10:00:36.759535 containerd[1530]: time="2025-05-17T10:00:36.759305674Z" level=info msg="Container df9df2c25c5bfd9d849379f26bf37a3a90e4aefa13a0144bca44cae16626a052: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:36.761900 containerd[1530]: time="2025-05-17T10:00:36.761848569Z" level=info msg="Container d4c1a9a7a4c1cb657c09d3c8c1d2bd1817037d72a84da7d517cd8c148e9b8acd: CDI devices from CRI Config.CDIDevices: []" May 17 10:00:36.764526 containerd[1530]: time="2025-05-17T10:00:36.764474375Z" level=info msg="CreateContainer within sandbox \"2e2810e32e2a6aacd09c7ccc27f5aa95d457da150c184b4fb7d54de50826b963\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df9df2c25c5bfd9d849379f26bf37a3a90e4aefa13a0144bca44cae16626a052\"" May 17 10:00:36.765713 containerd[1530]: time="2025-05-17T10:00:36.765685480Z" level=info msg="StartContainer for \"df9df2c25c5bfd9d849379f26bf37a3a90e4aefa13a0144bca44cae16626a052\"" May 17 10:00:36.766477 containerd[1530]: time="2025-05-17T10:00:36.766445691Z" level=info msg="connecting to shim df9df2c25c5bfd9d849379f26bf37a3a90e4aefa13a0144bca44cae16626a052" address="unix:///run/containerd/s/733b76011c49d114390f263c76ffe83dfad09b3d3c8310d68b5c52cdcef913ac" protocol=ttrpc version=3 May 17 10:00:36.767217 containerd[1530]: time="2025-05-17T10:00:36.767183214Z" level=info msg="CreateContainer within sandbox \"61c85062bf8e95c7ce80a6f8273927b94486564a87bd94f4e11a3274bb143405\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4c1a9a7a4c1cb657c09d3c8c1d2bd1817037d72a84da7d517cd8c148e9b8acd\"" May 17 10:00:36.767829 containerd[1530]: time="2025-05-17T10:00:36.767797889Z" level=info msg="StartContainer for \"d4c1a9a7a4c1cb657c09d3c8c1d2bd1817037d72a84da7d517cd8c148e9b8acd\"" May 17 10:00:36.768974 containerd[1530]: time="2025-05-17T10:00:36.768907555Z" level=info msg="connecting to shim d4c1a9a7a4c1cb657c09d3c8c1d2bd1817037d72a84da7d517cd8c148e9b8acd" address="unix:///run/containerd/s/bb82e552bb6b33eef7ef88a2b150b53c5b329aac4a1b5454b43bcbf174e8a441" protocol=ttrpc version=3 May 17 10:00:36.791661 
systemd[1]: Started cri-containerd-df9df2c25c5bfd9d849379f26bf37a3a90e4aefa13a0144bca44cae16626a052.scope - libcontainer container df9df2c25c5bfd9d849379f26bf37a3a90e4aefa13a0144bca44cae16626a052. May 17 10:00:36.794037 systemd[1]: Started cri-containerd-d4c1a9a7a4c1cb657c09d3c8c1d2bd1817037d72a84da7d517cd8c148e9b8acd.scope - libcontainer container d4c1a9a7a4c1cb657c09d3c8c1d2bd1817037d72a84da7d517cd8c148e9b8acd. May 17 10:00:36.841439 containerd[1530]: time="2025-05-17T10:00:36.841388418Z" level=info msg="StartContainer for \"d4c1a9a7a4c1cb657c09d3c8c1d2bd1817037d72a84da7d517cd8c148e9b8acd\" returns successfully" May 17 10:00:36.850324 containerd[1530]: time="2025-05-17T10:00:36.850217282Z" level=info msg="StartContainer for \"df9df2c25c5bfd9d849379f26bf37a3a90e4aefa13a0144bca44cae16626a052\" returns successfully" May 17 10:00:37.247590 kubelet[2643]: I0517 10:00:37.247284 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m4sj6" podStartSLOduration=22.247264535 podStartE2EDuration="22.247264535s" podCreationTimestamp="2025-05-17 10:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:00:37.247083308 +0000 UTC m=+27.250217368" watchObservedRunningTime="2025-05-17 10:00:37.247264535 +0000 UTC m=+27.250398595" May 17 10:00:37.285809 kubelet[2643]: I0517 10:00:37.285724 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kkv7k" podStartSLOduration=22.285676906 podStartE2EDuration="22.285676906s" podCreationTimestamp="2025-05-17 10:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:00:37.285477553 +0000 UTC m=+27.288611613" watchObservedRunningTime="2025-05-17 10:00:37.285676906 +0000 UTC m=+27.288811006" May 17 10:00:37.652868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643016065.mount: Deactivated successfully. May 17 10:00:39.150871 systemd[1]: Started sshd@7-10.0.0.72:22-10.0.0.1:42172.service - OpenSSH per-connection server daemon (10.0.0.1:42172). May 17 10:00:39.209891 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 42172 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:00:39.211274 sshd-session[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:00:39.215544 systemd-logind[1516]: New session 8 of user core. May 17 10:00:39.221633 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 10:00:39.342522 sshd[3987]: Connection closed by 10.0.0.1 port 42172 May 17 10:00:39.342823 sshd-session[3985]: pam_unix(sshd:session): session closed for user core May 17 10:00:39.346037 systemd[1]: sshd@7-10.0.0.72:22-10.0.0.1:42172.service: Deactivated successfully. May 17 10:00:39.347696 systemd[1]: session-8.scope: Deactivated successfully. May 17 10:00:39.348371 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit. May 17 10:00:39.349689 systemd-logind[1516]: Removed session 8. May 17 10:00:44.358983 systemd[1]: Started sshd@8-10.0.0.72:22-10.0.0.1:36406.service - OpenSSH per-connection server daemon (10.0.0.1:36406). 
May 17 10:00:44.418876 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 36406 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:00:44.420281 sshd-session[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:00:44.424139 systemd-logind[1516]: New session 9 of user core. May 17 10:00:44.433752 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 10:00:44.550541 sshd[4005]: Connection closed by 10.0.0.1 port 36406 May 17 10:00:44.551027 sshd-session[4003]: pam_unix(sshd:session): session closed for user core May 17 10:00:44.554577 systemd[1]: sshd@8-10.0.0.72:22-10.0.0.1:36406.service: Deactivated successfully. May 17 10:00:44.556156 systemd[1]: session-9.scope: Deactivated successfully. May 17 10:00:44.557314 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit. May 17 10:00:44.558731 systemd-logind[1516]: Removed session 9. May 17 10:00:49.563811 systemd[1]: Started sshd@9-10.0.0.72:22-10.0.0.1:36420.service - OpenSSH per-connection server daemon (10.0.0.1:36420). May 17 10:00:49.635111 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 36420 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:00:49.636268 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:00:49.640348 systemd-logind[1516]: New session 10 of user core. May 17 10:00:49.650648 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 10:00:49.765503 sshd[4027]: Connection closed by 10.0.0.1 port 36420 May 17 10:00:49.763476 sshd-session[4025]: pam_unix(sshd:session): session closed for user core May 17 10:00:49.766803 systemd[1]: sshd@9-10.0.0.72:22-10.0.0.1:36420.service: Deactivated successfully. May 17 10:00:49.768851 systemd[1]: session-10.scope: Deactivated successfully. May 17 10:00:49.770051 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. May 17 10:00:49.772206 systemd-logind[1516]: Removed session 10. May 17 10:00:54.780637 systemd[1]: Started sshd@10-10.0.0.72:22-10.0.0.1:36324.service - OpenSSH per-connection server daemon (10.0.0.1:36324). May 17 10:00:54.824817 sshd[4042]: Accepted publickey for core from 10.0.0.1 port 36324 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:00:54.826091 sshd-session[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:00:54.830625 systemd-logind[1516]: New session 11 of user core. May 17 10:00:54.841778 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 10:00:54.953027 sshd[4044]: Connection closed by 10.0.0.1 port 36324 May 17 10:00:54.953533 sshd-session[4042]: pam_unix(sshd:session): session closed for user core May 17 10:00:54.964614 systemd[1]: sshd@10-10.0.0.72:22-10.0.0.1:36324.service: Deactivated successfully. May 17 10:00:54.967870 systemd[1]: session-11.scope: Deactivated successfully. May 17 10:00:54.968898 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit. May 17 10:00:54.971754 systemd-logind[1516]: Removed session 11. May 17 10:00:54.974697 systemd[1]: Started sshd@11-10.0.0.72:22-10.0.0.1:36338.service - OpenSSH per-connection server daemon (10.0.0.1:36338). 
May 17 10:00:55.032601 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 36338 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:00:55.033612 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:00:55.037593 systemd-logind[1516]: New session 12 of user core. May 17 10:00:55.059670 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 10:00:55.208021 sshd[4060]: Connection closed by 10.0.0.1 port 36338 May 17 10:00:55.208347 sshd-session[4058]: pam_unix(sshd:session): session closed for user core May 17 10:00:55.219456 systemd[1]: sshd@11-10.0.0.72:22-10.0.0.1:36338.service: Deactivated successfully. May 17 10:00:55.225608 systemd[1]: session-12.scope: Deactivated successfully. May 17 10:00:55.227814 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit. May 17 10:00:55.230850 systemd-logind[1516]: Removed session 12. May 17 10:00:55.233384 systemd[1]: Started sshd@12-10.0.0.72:22-10.0.0.1:36342.service - OpenSSH per-connection server daemon (10.0.0.1:36342). May 17 10:00:55.299146 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 36342 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:00:55.300114 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:00:55.304044 systemd-logind[1516]: New session 13 of user core. May 17 10:00:55.314920 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 10:00:55.436549 sshd[4073]: Connection closed by 10.0.0.1 port 36342 May 17 10:00:55.437209 sshd-session[4071]: pam_unix(sshd:session): session closed for user core May 17 10:00:55.440591 systemd[1]: sshd@12-10.0.0.72:22-10.0.0.1:36342.service: Deactivated successfully. May 17 10:00:55.442607 systemd[1]: session-13.scope: Deactivated successfully. May 17 10:00:55.443396 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit. May 17 10:00:55.444627 systemd-logind[1516]: Removed session 13. May 17 10:01:00.458123 systemd[1]: Started sshd@13-10.0.0.72:22-10.0.0.1:36344.service - OpenSSH per-connection server daemon (10.0.0.1:36344). May 17 10:01:00.502828 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 36344 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:00.504106 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:00.508543 systemd-logind[1516]: New session 14 of user core. May 17 10:01:00.522685 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 10:01:00.637326 sshd[4089]: Connection closed by 10.0.0.1 port 36344 May 17 10:01:00.638019 sshd-session[4087]: pam_unix(sshd:session): session closed for user core May 17 10:01:00.641884 systemd[1]: sshd@13-10.0.0.72:22-10.0.0.1:36344.service: Deactivated successfully. May 17 10:01:00.647623 systemd[1]: session-14.scope: Deactivated successfully. May 17 10:01:00.649392 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit. May 17 10:01:00.651464 systemd-logind[1516]: Removed session 14. May 17 10:01:05.653708 systemd[1]: Started sshd@14-10.0.0.72:22-10.0.0.1:40948.service - OpenSSH per-connection server daemon (10.0.0.1:40948). 
May 17 10:01:05.697586 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 40948 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:05.698739 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:05.702964 systemd-logind[1516]: New session 15 of user core. May 17 10:01:05.717623 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 10:01:05.828842 sshd[4104]: Connection closed by 10.0.0.1 port 40948 May 17 10:01:05.829200 sshd-session[4102]: pam_unix(sshd:session): session closed for user core May 17 10:01:05.841547 systemd[1]: sshd@14-10.0.0.72:22-10.0.0.1:40948.service: Deactivated successfully. May 17 10:01:05.843378 systemd[1]: session-15.scope: Deactivated successfully. May 17 10:01:05.844204 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit. May 17 10:01:05.847268 systemd[1]: Started sshd@15-10.0.0.72:22-10.0.0.1:40964.service - OpenSSH per-connection server daemon (10.0.0.1:40964). May 17 10:01:05.848690 systemd-logind[1516]: Removed session 15. May 17 10:01:05.915115 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 40964 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:05.916242 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:05.920819 systemd-logind[1516]: New session 16 of user core. May 17 10:01:05.930636 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 10:01:06.110832 sshd[4120]: Connection closed by 10.0.0.1 port 40964 May 17 10:01:06.111543 sshd-session[4117]: pam_unix(sshd:session): session closed for user core May 17 10:01:06.125482 systemd[1]: sshd@15-10.0.0.72:22-10.0.0.1:40964.service: Deactivated successfully. May 17 10:01:06.127649 systemd[1]: session-16.scope: Deactivated successfully. May 17 10:01:06.128546 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit. May 17 10:01:06.130961 systemd-logind[1516]: Removed session 16. May 17 10:01:06.132895 systemd[1]: Started sshd@16-10.0.0.72:22-10.0.0.1:40980.service - OpenSSH per-connection server daemon (10.0.0.1:40980). May 17 10:01:06.194634 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 40980 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:06.195651 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:06.199564 systemd-logind[1516]: New session 17 of user core. May 17 10:01:06.210643 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 10:01:06.868154 sshd[4133]: Connection closed by 10.0.0.1 port 40980 May 17 10:01:06.868703 sshd-session[4131]: pam_unix(sshd:session): session closed for user core May 17 10:01:06.882081 systemd[1]: sshd@16-10.0.0.72:22-10.0.0.1:40980.service: Deactivated successfully. May 17 10:01:06.885393 systemd[1]: session-17.scope: Deactivated successfully. May 17 10:01:06.886827 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit. May 17 10:01:06.891616 systemd[1]: Started sshd@17-10.0.0.72:22-10.0.0.1:40984.service - OpenSSH per-connection server daemon (10.0.0.1:40984). May 17 10:01:06.892540 systemd-logind[1516]: Removed session 17. 
May 17 10:01:06.950201 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 40984 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:06.951726 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:06.955373 systemd-logind[1516]: New session 18 of user core. May 17 10:01:06.974636 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 10:01:07.183786 sshd[4154]: Connection closed by 10.0.0.1 port 40984 May 17 10:01:07.187636 sshd-session[4152]: pam_unix(sshd:session): session closed for user core May 17 10:01:07.195309 systemd[1]: sshd@17-10.0.0.72:22-10.0.0.1:40984.service: Deactivated successfully. May 17 10:01:07.197176 systemd[1]: session-18.scope: Deactivated successfully. May 17 10:01:07.198051 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit. May 17 10:01:07.201316 systemd[1]: Started sshd@18-10.0.0.72:22-10.0.0.1:40994.service - OpenSSH per-connection server daemon (10.0.0.1:40994). May 17 10:01:07.202008 systemd-logind[1516]: Removed session 18. May 17 10:01:07.259002 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 40994 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:07.261092 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:07.267725 systemd-logind[1516]: New session 19 of user core. May 17 10:01:07.275666 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 10:01:07.384279 sshd[4168]: Connection closed by 10.0.0.1 port 40994 May 17 10:01:07.384816 sshd-session[4166]: pam_unix(sshd:session): session closed for user core May 17 10:01:07.387596 systemd[1]: sshd@18-10.0.0.72:22-10.0.0.1:40994.service: Deactivated successfully. May 17 10:01:07.389155 systemd[1]: session-19.scope: Deactivated successfully. May 17 10:01:07.390411 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit. May 17 10:01:07.391738 systemd-logind[1516]: Removed session 19. May 17 10:01:12.408725 systemd[1]: Started sshd@19-10.0.0.72:22-10.0.0.1:40996.service - OpenSSH per-connection server daemon (10.0.0.1:40996). May 17 10:01:12.463957 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 40996 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:12.465215 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:12.469285 systemd-logind[1516]: New session 20 of user core. May 17 10:01:12.478642 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 10:01:12.585919 sshd[4189]: Connection closed by 10.0.0.1 port 40996 May 17 10:01:12.586261 sshd-session[4187]: pam_unix(sshd:session): session closed for user core May 17 10:01:12.589694 systemd[1]: sshd@19-10.0.0.72:22-10.0.0.1:40996.service: Deactivated successfully. May 17 10:01:12.591275 systemd[1]: session-20.scope: Deactivated successfully. May 17 10:01:12.593119 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit. May 17 10:01:12.594404 systemd-logind[1516]: Removed session 20. May 17 10:01:17.599647 systemd[1]: Started sshd@20-10.0.0.72:22-10.0.0.1:45314.service - OpenSSH per-connection server daemon (10.0.0.1:45314). 
May 17 10:01:17.644323 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 45314 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:17.644785 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:17.649502 systemd-logind[1516]: New session 21 of user core. May 17 10:01:17.659643 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 10:01:17.771337 sshd[4206]: Connection closed by 10.0.0.1 port 45314 May 17 10:01:17.771845 sshd-session[4204]: pam_unix(sshd:session): session closed for user core May 17 10:01:17.775107 systemd[1]: sshd@20-10.0.0.72:22-10.0.0.1:45314.service: Deactivated successfully. May 17 10:01:17.777798 systemd[1]: session-21.scope: Deactivated successfully. May 17 10:01:17.778754 systemd-logind[1516]: Session 21 logged out. Waiting for processes to exit. May 17 10:01:17.780249 systemd-logind[1516]: Removed session 21. May 17 10:01:22.790486 systemd[1]: Started sshd@21-10.0.0.72:22-10.0.0.1:36860.service - OpenSSH per-connection server daemon (10.0.0.1:36860). May 17 10:01:22.853538 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 36860 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:22.854693 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:22.858772 systemd-logind[1516]: New session 22 of user core. May 17 10:01:22.871640 systemd[1]: Started session-22.scope - Session 22 of User core. May 17 10:01:22.976731 sshd[4222]: Connection closed by 10.0.0.1 port 36860 May 17 10:01:22.977059 sshd-session[4220]: pam_unix(sshd:session): session closed for user core May 17 10:01:22.989321 systemd[1]: sshd@21-10.0.0.72:22-10.0.0.1:36860.service: Deactivated successfully. May 17 10:01:22.991147 systemd[1]: session-22.scope: Deactivated successfully. May 17 10:01:22.992198 systemd-logind[1516]: Session 22 logged out. Waiting for processes to exit. May 17 10:01:22.994261 systemd-logind[1516]: Removed session 22. May 17 10:01:22.996713 systemd[1]: Started sshd@22-10.0.0.72:22-10.0.0.1:36876.service - OpenSSH per-connection server daemon (10.0.0.1:36876). May 17 10:01:23.056987 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 36876 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:23.058162 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:23.063820 systemd-logind[1516]: New session 23 of user core. May 17 10:01:23.074652 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 17 10:01:24.098858 kubelet[2643]: E0517 10:01:24.098762 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:24.675569 containerd[1530]: time="2025-05-17T10:01:24.675514858Z" level=info msg="StopContainer for \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\" with timeout 30 (s)" May 17 10:01:24.676588 containerd[1530]: time="2025-05-17T10:01:24.675936408Z" level=info msg="Stop container \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\" with signal terminated" May 17 10:01:24.698346 containerd[1530]: time="2025-05-17T10:01:24.698301234Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 10:01:24.699084 systemd[1]: cri-containerd-97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20.scope: Deactivated successfully. May 17 10:01:24.701789 containerd[1530]: time="2025-05-17T10:01:24.700737109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\" id:\"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\" pid:3173 exited_at:{seconds:1747476084 nanos:700303840}" May 17 10:01:24.702192 containerd[1530]: time="2025-05-17T10:01:24.702072033Z" level=info msg="received exit event container_id:\"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\" id:\"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\" pid:3173 exited_at:{seconds:1747476084 nanos:700303840}" May 17 10:01:24.706533 containerd[1530]: time="2025-05-17T10:01:24.705741244Z" level=info msg="TaskExit event in podsandbox handler container_id:\"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" id:\"34093d638467bd756b191e4b0b2afda490018c930dcc45a4c8501ff83d90f630\" pid:4264 exited_at:{seconds:1747476084 nanos:705547747}" May 17 10:01:24.707598 containerd[1530]: time="2025-05-17T10:01:24.707567311Z" level=info msg="StopContainer for \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" with timeout 2 (s)" May 17 10:01:24.707853 containerd[1530]: time="2025-05-17T10:01:24.707831600Z" level=info msg="Stop container \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" with signal terminated" May 17 10:01:24.715277 systemd-networkd[1440]: lxc_health: Link DOWN May 17 10:01:24.715283 systemd-networkd[1440]: lxc_health: Lost carrier May 17 10:01:24.727578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20-rootfs.mount: Deactivated successfully. May 17 10:01:24.740607 systemd[1]: cri-containerd-851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d.scope: Deactivated successfully. 
May 17 10:01:24.742509 containerd[1530]: time="2025-05-17T10:01:24.741054996Z" level=info msg="received exit event container_id:\"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" id:\"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" pid:3301 exited_at:{seconds:1747476084 nanos:739983801}" May 17 10:01:24.742509 containerd[1530]: time="2025-05-17T10:01:24.741059956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" id:\"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" pid:3301 exited_at:{seconds:1747476084 nanos:739983801}" May 17 10:01:24.742602 systemd[1]: cri-containerd-851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d.scope: Consumed 6.435s CPU time, 121.4M memory peak, 204K read from disk, 12.9M written to disk. May 17 10:01:24.747374 containerd[1530]: time="2025-05-17T10:01:24.747324863Z" level=info msg="StopContainer for \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\" returns successfully" May 17 10:01:24.759731 containerd[1530]: time="2025-05-17T10:01:24.759554354Z" level=info msg="StopPodSandbox for \"9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6\"" May 17 10:01:24.763882 containerd[1530]: time="2025-05-17T10:01:24.763563365Z" level=info msg="Container to stop \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 10:01:24.772934 systemd[1]: cri-containerd-9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6.scope: Deactivated successfully. May 17 10:01:24.774601 containerd[1530]: time="2025-05-17T10:01:24.774561559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6\" id:\"9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6\" pid:2879 exit_status:137 exited_at:{seconds:1747476084 nanos:774146408}" May 17 10:01:24.779606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d-rootfs.mount: Deactivated successfully. 
May 17 10:01:24.786908 containerd[1530]: time="2025-05-17T10:01:24.786860482Z" level=info msg="StopContainer for \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" returns successfully" May 17 10:01:24.787563 containerd[1530]: time="2025-05-17T10:01:24.787532043Z" level=info msg="StopPodSandbox for \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\"" May 17 10:01:24.787624 containerd[1530]: time="2025-05-17T10:01:24.787587637Z" level=info msg="Container to stop \"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 10:01:24.787703 containerd[1530]: time="2025-05-17T10:01:24.787688665Z" level=info msg="Container to stop \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 10:01:24.787736 containerd[1530]: time="2025-05-17T10:01:24.787702263Z" level=info msg="Container to stop \"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 10:01:24.787736 containerd[1530]: time="2025-05-17T10:01:24.787710902Z" level=info msg="Container to stop \"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 10:01:24.787736 containerd[1530]: time="2025-05-17T10:01:24.787718421Z" level=info msg="Container to stop \"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 10:01:24.793393 systemd[1]: cri-containerd-409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697.scope: Deactivated successfully. May 17 10:01:24.803979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6-rootfs.mount: Deactivated successfully. May 17 10:01:24.808626 containerd[1530]: time="2025-05-17T10:01:24.807835710Z" level=info msg="shim disconnected" id=9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6 namespace=k8s.io May 17 10:01:24.816385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697-rootfs.mount: Deactivated successfully. 
May 17 10:01:24.822886 containerd[1530]: time="2025-05-17T10:01:24.807868346Z" level=warning msg="cleaning up after shim disconnected" id=9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6 namespace=k8s.io May 17 10:01:24.822886 containerd[1530]: time="2025-05-17T10:01:24.822878431Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 10:01:24.836695 containerd[1530]: time="2025-05-17T10:01:24.836360215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" id:\"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" pid:2795 exit_status:137 exited_at:{seconds:1747476084 nanos:793597414}" May 17 10:01:24.836695 containerd[1530]: time="2025-05-17T10:01:24.836432847Z" level=info msg="received exit event sandbox_id:\"9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6\" exit_status:137 exited_at:{seconds:1747476084 nanos:774146408}" May 17 10:01:24.836992 containerd[1530]: time="2025-05-17T10:01:24.836534875Z" level=info msg="TearDown network for sandbox \"9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6\" successfully" May 17 10:01:24.836992 containerd[1530]: time="2025-05-17T10:01:24.836890753Z" level=info msg="StopPodSandbox for \"9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6\" returns successfully" May 17 10:01:24.838227 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e040522493d0e010f38d0786c632b05eacee7ec516df5d77968906225a766d6-shm.mount: Deactivated successfully. May 17 10:01:24.850706 containerd[1530]: time="2025-05-17T10:01:24.850635946Z" level=info msg="shim disconnected" id=409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697 namespace=k8s.io May 17 10:01:24.850706 containerd[1530]: time="2025-05-17T10:01:24.850666303Z" level=warning msg="cleaning up after shim disconnected" id=409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697 namespace=k8s.io May 17 10:01:24.850706 containerd[1530]: time="2025-05-17T10:01:24.850694019Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 10:01:24.851294 containerd[1530]: time="2025-05-17T10:01:24.850884437Z" level=info msg="received exit event sandbox_id:\"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" exit_status:137 exited_at:{seconds:1747476084 nanos:793597414}" May 17 10:01:24.851375 containerd[1530]: time="2025-05-17T10:01:24.851330465Z" level=info msg="TearDown network for sandbox \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" successfully" May 17 10:01:24.851375 containerd[1530]: time="2025-05-17T10:01:24.851360822Z" level=info msg="StopPodSandbox for \"409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697\" returns successfully" May 17 10:01:24.973058 kubelet[2643]: I0517 10:01:24.972937 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-run\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973058 kubelet[2643]: I0517 10:01:24.972989 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-host-proc-sys-net\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973058 kubelet[2643]: I0517 
10:01:24.973006 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-xtables-lock\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973058 kubelet[2643]: I0517 10:01:24.973022 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-etc-cni-netd\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973058 kubelet[2643]: I0517 10:01:24.973044 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-clustermesh-secrets\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973317 kubelet[2643]: I0517 10:01:24.973071 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-host-proc-sys-kernel\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973317 kubelet[2643]: I0517 10:01:24.973091 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f25b87e9-cd56-46cd-a07d-45fb46b3797d-cilium-config-path\") pod \"f25b87e9-cd56-46cd-a07d-45fb46b3797d\" (UID: \"f25b87e9-cd56-46cd-a07d-45fb46b3797d\") " May 17 10:01:24.973317 kubelet[2643]: I0517 10:01:24.973109 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-hubble-tls\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973317 kubelet[2643]: I0517 10:01:24.973126 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-config-path\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973317 kubelet[2643]: I0517 10:01:24.973142 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cni-path\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973317 kubelet[2643]: I0517 10:01:24.973158 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzp22\" (UniqueName: \"kubernetes.io/projected/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-kube-api-access-mzp22\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973453 kubelet[2643]: I0517 10:01:24.973175 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-lib-modules\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973453 kubelet[2643]: I0517 10:01:24.973188 2643 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-cgroup\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973453 kubelet[2643]: I0517 10:01:24.973201 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-hostproc\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973453 kubelet[2643]: I0517 10:01:24.973216 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-bpf-maps\") pod \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\" (UID: \"1920fe25-22b4-4757-b4c3-9dad28aa1e5b\") " May 17 10:01:24.973453 kubelet[2643]: I0517 10:01:24.973234 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvmp6\" (UniqueName: \"kubernetes.io/projected/f25b87e9-cd56-46cd-a07d-45fb46b3797d-kube-api-access-jvmp6\") pod \"f25b87e9-cd56-46cd-a07d-45fb46b3797d\" (UID: \"f25b87e9-cd56-46cd-a07d-45fb46b3797d\") " May 17 10:01:24.975525 kubelet[2643]: I0517 10:01:24.974617 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:01:24.975525 kubelet[2643]: I0517 10:01:24.974674 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:01:24.975525 kubelet[2643]: I0517 10:01:24.974690 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:01:24.975525 kubelet[2643]: I0517 10:01:24.974974 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:01:24.975525 kubelet[2643]: I0517 10:01:24.975000 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:01:24.975832 kubelet[2643]: I0517 10:01:24.975811 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cni-path" (OuterVolumeSpecName: "cni-path") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:01:24.976722 kubelet[2643]: I0517 10:01:24.976699 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:01:24.976835 kubelet[2643]: I0517 10:01:24.976754 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f25b87e9-cd56-46cd-a07d-45fb46b3797d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f25b87e9-cd56-46cd-a07d-45fb46b3797d" (UID: "f25b87e9-cd56-46cd-a07d-45fb46b3797d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 10:01:24.976920 kubelet[2643]: I0517 10:01:24.976907 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:01:24.976998 kubelet[2643]: I0517 10:01:24.976986 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-hostproc" (OuterVolumeSpecName: "hostproc") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:01:24.977070 kubelet[2643]: I0517 10:01:24.977060 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:01:24.977797 kubelet[2643]: I0517 10:01:24.977764 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25b87e9-cd56-46cd-a07d-45fb46b3797d-kube-api-access-jvmp6" (OuterVolumeSpecName: "kube-api-access-jvmp6") pod "f25b87e9-cd56-46cd-a07d-45fb46b3797d" (UID: "f25b87e9-cd56-46cd-a07d-45fb46b3797d"). InnerVolumeSpecName "kube-api-access-jvmp6". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 10:01:24.978024 kubelet[2643]: I0517 10:01:24.977999 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 10:01:24.978220 kubelet[2643]: I0517 10:01:24.978198 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 10:01:24.978873 kubelet[2643]: I0517 10:01:24.978839 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 10:01:24.979228 kubelet[2643]: I0517 10:01:24.979202 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-kube-api-access-mzp22" (OuterVolumeSpecName: "kube-api-access-mzp22") pod "1920fe25-22b4-4757-b4c3-9dad28aa1e5b" (UID: "1920fe25-22b4-4757-b4c3-9dad28aa1e5b"). InnerVolumeSpecName "kube-api-access-mzp22". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 10:01:25.073767 kubelet[2643]: I0517 10:01:25.073727 2643 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-run\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.073767 kubelet[2643]: I0517 10:01:25.073760 2643 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.073767 kubelet[2643]: I0517 10:01:25.073772 2643 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.073936 kubelet[2643]: I0517 10:01:25.073782 2643 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.073936 kubelet[2643]: I0517 10:01:25.073790 2643 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.073936 kubelet[2643]: I0517 10:01:25.073799 2643 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.073936 kubelet[2643]: I0517 10:01:25.073807 2643 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f25b87e9-cd56-46cd-a07d-45fb46b3797d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.073936 kubelet[2643]: I0517 10:01:25.073815 2643 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-hubble-tls\") on node \"localhost\" 
DevicePath \"\"" May 17 10:01:25.073936 kubelet[2643]: I0517 10:01:25.073823 2643 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.073936 kubelet[2643]: I0517 10:01:25.073830 2643 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cni-path\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.073936 kubelet[2643]: I0517 10:01:25.073838 2643 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mzp22\" (UniqueName: \"kubernetes.io/projected/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-kube-api-access-mzp22\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.074108 kubelet[2643]: I0517 10:01:25.073846 2643 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-lib-modules\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.074108 kubelet[2643]: I0517 10:01:25.073853 2643 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.074108 kubelet[2643]: I0517 10:01:25.073861 2643 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-hostproc\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.074108 kubelet[2643]: I0517 10:01:25.073868 2643 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1920fe25-22b4-4757-b4c3-9dad28aa1e5b-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.074108 kubelet[2643]: I0517 10:01:25.073875 2643 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jvmp6\" (UniqueName: \"kubernetes.io/projected/f25b87e9-cd56-46cd-a07d-45fb46b3797d-kube-api-access-jvmp6\") on node \"localhost\" DevicePath \"\"" May 17 10:01:25.164790 kubelet[2643]: E0517 10:01:25.164746 2643 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 10:01:25.329732 kubelet[2643]: I0517 10:01:25.329705 2643 scope.go:117] "RemoveContainer" containerID="97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20" May 17 10:01:25.331848 containerd[1530]: time="2025-05-17T10:01:25.331795526Z" level=info msg="RemoveContainer for \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\"" May 17 10:01:25.332948 systemd[1]: Removed slice kubepods-besteffort-podf25b87e9_cd56_46cd_a07d_45fb46b3797d.slice - libcontainer container kubepods-besteffort-podf25b87e9_cd56_46cd_a07d_45fb46b3797d.slice. May 17 10:01:25.337741 systemd[1]: Removed slice kubepods-burstable-pod1920fe25_22b4_4757_b4c3_9dad28aa1e5b.slice - libcontainer container kubepods-burstable-pod1920fe25_22b4_4757_b4c3_9dad28aa1e5b.slice. May 17 10:01:25.337837 systemd[1]: kubepods-burstable-pod1920fe25_22b4_4757_b4c3_9dad28aa1e5b.slice: Consumed 6.612s CPU time, 121.7M memory peak, 256K read from disk, 15.2M written to disk. 
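The teardown recorded above (StopPodSandbox, the paired UnmountVolume/"Volume detached" entries, and removal of the kubepods-* slices) is the kubelet's reaction to the two pod objects being deleted from the API server. A minimal client-go sketch of issuing such a deletion follows; the kubeconfig path and the pod name are placeholders, not values taken from this log.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig (assumed path).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Deleting the pod object is what makes the kubelet stop the sandbox,
	// unmount the pod's volumes and remove its cgroup slice, as logged above.
	err = cs.CoreV1().Pods("kube-system").Delete(context.Background(),
		"example-cilium-pod", metav1.DeleteOptions{}) // placeholder pod name
	if err != nil {
		log.Fatal(err)
	}
}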
May 17 10:01:25.355743 containerd[1530]: time="2025-05-17T10:01:25.355706326Z" level=info msg="RemoveContainer for \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\" returns successfully" May 17 10:01:25.356029 kubelet[2643]: I0517 10:01:25.355993 2643 scope.go:117] "RemoveContainer" containerID="97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20" May 17 10:01:25.356270 containerd[1530]: time="2025-05-17T10:01:25.356240227Z" level=error msg="ContainerStatus for \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\": not found" May 17 10:01:25.362173 kubelet[2643]: E0517 10:01:25.362143 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\": not found" containerID="97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20" May 17 10:01:25.366254 kubelet[2643]: I0517 10:01:25.366147 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20"} err="failed to get container status \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\": rpc error: code = NotFound desc = an error occurred when try to find container \"97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20\": not found" May 17 10:01:25.366334 kubelet[2643]: I0517 10:01:25.366261 2643 scope.go:117] "RemoveContainer" containerID="851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d" May 17 10:01:25.368097 containerd[1530]: time="2025-05-17T10:01:25.368065001Z" level=info msg="RemoveContainer for \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\"" May 17 10:01:25.371726 containerd[1530]: time="2025-05-17T10:01:25.371688241Z" level=info msg="RemoveContainer for \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" returns successfully" May 17 10:01:25.371990 kubelet[2643]: I0517 10:01:25.371973 2643 scope.go:117] "RemoveContainer" containerID="0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca" May 17 10:01:25.373539 containerd[1530]: time="2025-05-17T10:01:25.373513759Z" level=info msg="RemoveContainer for \"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\"" May 17 10:01:25.376787 containerd[1530]: time="2025-05-17T10:01:25.376760681Z" level=info msg="RemoveContainer for \"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\" returns successfully" May 17 10:01:25.377077 kubelet[2643]: I0517 10:01:25.376952 2643 scope.go:117] "RemoveContainer" containerID="62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd" May 17 10:01:25.382983 containerd[1530]: time="2025-05-17T10:01:25.382948238Z" level=info msg="RemoveContainer for \"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\"" May 17 10:01:25.386794 containerd[1530]: time="2025-05-17T10:01:25.386751298Z" level=info msg="RemoveContainer for \"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\" returns successfully" May 17 10:01:25.386958 kubelet[2643]: I0517 10:01:25.386935 2643 scope.go:117] "RemoveContainer" containerID="b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619" May 17 10:01:25.388458 containerd[1530]: 
time="2025-05-17T10:01:25.388435752Z" level=info msg="RemoveContainer for \"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\"" May 17 10:01:25.390958 containerd[1530]: time="2025-05-17T10:01:25.390929316Z" level=info msg="RemoveContainer for \"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\" returns successfully" May 17 10:01:25.391102 kubelet[2643]: I0517 10:01:25.391067 2643 scope.go:117] "RemoveContainer" containerID="965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db" May 17 10:01:25.392499 containerd[1530]: time="2025-05-17T10:01:25.392463827Z" level=info msg="RemoveContainer for \"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\"" May 17 10:01:25.395054 containerd[1530]: time="2025-05-17T10:01:25.395025584Z" level=info msg="RemoveContainer for \"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\" returns successfully" May 17 10:01:25.395249 kubelet[2643]: I0517 10:01:25.395216 2643 scope.go:117] "RemoveContainer" containerID="851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d" May 17 10:01:25.395462 containerd[1530]: time="2025-05-17T10:01:25.395430859Z" level=error msg="ContainerStatus for \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\": not found" May 17 10:01:25.395613 kubelet[2643]: E0517 10:01:25.395581 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\": not found" containerID="851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d" May 17 10:01:25.395647 kubelet[2643]: I0517 10:01:25.395612 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d"} err="failed to get container status \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\": rpc error: code = NotFound desc = an error occurred when try to find container \"851e5e58609a05b1376b2775237b4fe5500e4ffee367db76120904f0dbaed12d\": not found" May 17 10:01:25.395647 kubelet[2643]: I0517 10:01:25.395633 2643 scope.go:117] "RemoveContainer" containerID="0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca" May 17 10:01:25.395823 containerd[1530]: time="2025-05-17T10:01:25.395791740Z" level=error msg="ContainerStatus for \"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\": not found" May 17 10:01:25.395948 kubelet[2643]: E0517 10:01:25.395925 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\": not found" containerID="0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca" May 17 10:01:25.395985 kubelet[2643]: I0517 10:01:25.395954 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca"} err="failed to get container status 
\"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"0631445707bc10ad5f67afd575832bd7f30d5a7de2a360869f9b15232f12b0ca\": not found" May 17 10:01:25.395985 kubelet[2643]: I0517 10:01:25.395972 2643 scope.go:117] "RemoveContainer" containerID="62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd" May 17 10:01:25.396164 containerd[1530]: time="2025-05-17T10:01:25.396134742Z" level=error msg="ContainerStatus for \"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\": not found" May 17 10:01:25.396346 kubelet[2643]: E0517 10:01:25.396292 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\": not found" containerID="62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd" May 17 10:01:25.396375 kubelet[2643]: I0517 10:01:25.396349 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd"} err="failed to get container status \"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"62cbf4955f577dee8c9df8b6026c89970239365efb1fe83c2619fa3f1e9ba2cd\": not found" May 17 10:01:25.396375 kubelet[2643]: I0517 10:01:25.396368 2643 scope.go:117] "RemoveContainer" containerID="b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619" May 17 10:01:25.396615 containerd[1530]: time="2025-05-17T10:01:25.396584732Z" level=error msg="ContainerStatus for \"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\": not found" May 17 10:01:25.396742 kubelet[2643]: E0517 10:01:25.396717 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\": not found" containerID="b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619" May 17 10:01:25.396779 kubelet[2643]: I0517 10:01:25.396750 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619"} err="failed to get container status \"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1f66cc4cb30c2f4e6e0ca2411874645ffedc2acb0172803cade71df0c6a6619\": not found" May 17 10:01:25.396779 kubelet[2643]: I0517 10:01:25.396765 2643 scope.go:117] "RemoveContainer" containerID="965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db" May 17 10:01:25.396959 containerd[1530]: time="2025-05-17T10:01:25.396933093Z" level=error msg="ContainerStatus for \"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\": not found" May 17 10:01:25.397074 kubelet[2643]: E0517 10:01:25.397057 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\": not found" containerID="965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db" May 17 10:01:25.397107 kubelet[2643]: I0517 10:01:25.397079 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db"} err="failed to get container status \"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\": rpc error: code = NotFound desc = an error occurred when try to find container \"965ba86e9ee419ca8d015f538f8f6a309b415c3ba0531f4733ac9ac93cf200db\": not found" May 17 10:01:25.727478 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-409292bd8d24d9b334e9d431a35ace55a101fb6f39941e6c60befb79ca895697-shm.mount: Deactivated successfully. May 17 10:01:25.727589 systemd[1]: var-lib-kubelet-pods-f25b87e9\x2dcd56\x2d46cd\x2da07d\x2d45fb46b3797d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djvmp6.mount: Deactivated successfully. May 17 10:01:25.727641 systemd[1]: var-lib-kubelet-pods-1920fe25\x2d22b4\x2d4757\x2db4c3\x2d9dad28aa1e5b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmzp22.mount: Deactivated successfully. May 17 10:01:25.727698 systemd[1]: var-lib-kubelet-pods-1920fe25\x2d22b4\x2d4757\x2db4c3\x2d9dad28aa1e5b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 10:01:25.727747 systemd[1]: var-lib-kubelet-pods-1920fe25\x2d22b4\x2d4757\x2db4c3\x2d9dad28aa1e5b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 10:01:26.096810 kubelet[2643]: I0517 10:01:26.096765 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1920fe25-22b4-4757-b4c3-9dad28aa1e5b" path="/var/lib/kubelet/pods/1920fe25-22b4-4757-b4c3-9dad28aa1e5b/volumes" May 17 10:01:26.097355 kubelet[2643]: I0517 10:01:26.097327 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f25b87e9-cd56-46cd-a07d-45fb46b3797d" path="/var/lib/kubelet/pods/f25b87e9-cd56-46cd-a07d-45fb46b3797d/volumes" May 17 10:01:26.629905 sshd[4238]: Connection closed by 10.0.0.1 port 36876 May 17 10:01:26.630287 sshd-session[4236]: pam_unix(sshd:session): session closed for user core May 17 10:01:26.639409 systemd[1]: sshd@22-10.0.0.72:22-10.0.0.1:36876.service: Deactivated successfully. May 17 10:01:26.640764 systemd[1]: session-23.scope: Deactivated successfully. May 17 10:01:26.641464 systemd-logind[1516]: Session 23 logged out. Waiting for processes to exit. May 17 10:01:26.644870 systemd-logind[1516]: Removed session 23. May 17 10:01:26.645936 systemd[1]: Started sshd@23-10.0.0.72:22-10.0.0.1:36878.service - OpenSSH per-connection server daemon (10.0.0.1:36878). May 17 10:01:26.699768 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 36878 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:26.700918 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:26.704713 systemd-logind[1516]: New session 24 of user core. May 17 10:01:26.718635 systemd[1]: Started session-24.scope - Session 24 of User core. 
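The RemoveContainer calls followed by ContainerStatus returning NotFound show a delete-then-verify pattern against the runtime: a second lookup of a removed container is expected to fail with NotFound and is then treated as success. A rough equivalent with the containerd 1.x Go client, reusing one container ID from the log above and assuming the default socket path:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	const id = "97083f3ba7d92c6dd98c87c8954c151ad9ccbc4e60c09fe88ef67d0792688d20"

	if c, err := client.LoadContainer(ctx, id); err == nil {
		// Reap an exited task first, if one is still registered.
		if t, terr := c.Task(ctx, nil); terr == nil {
			if _, derr := t.Delete(ctx); derr != nil {
				log.Fatal(derr)
			}
		}
		if err := c.Delete(ctx, containerd.WithSnapshotCleanup); err != nil {
			log.Fatal(err)
		}
	}

	// A follow-up lookup now fails with NotFound, which the kubelet surfaces as
	// "ContainerStatus from runtime service failed ... not found".
	if _, err := client.LoadContainer(ctx, id); errdefs.IsNotFound(err) {
		fmt.Println("container already removed; treating as success")
	}
}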
May 17 10:01:27.548831 sshd[4395]: Connection closed by 10.0.0.1 port 36878 May 17 10:01:27.551295 sshd-session[4393]: pam_unix(sshd:session): session closed for user core May 17 10:01:27.561918 systemd[1]: sshd@23-10.0.0.72:22-10.0.0.1:36878.service: Deactivated successfully. May 17 10:01:27.565510 kubelet[2643]: I0517 10:01:27.563163 2643 memory_manager.go:355] "RemoveStaleState removing state" podUID="f25b87e9-cd56-46cd-a07d-45fb46b3797d" containerName="cilium-operator" May 17 10:01:27.565510 kubelet[2643]: I0517 10:01:27.563189 2643 memory_manager.go:355] "RemoveStaleState removing state" podUID="1920fe25-22b4-4757-b4c3-9dad28aa1e5b" containerName="cilium-agent" May 17 10:01:27.565166 systemd[1]: session-24.scope: Deactivated successfully. May 17 10:01:27.566977 systemd-logind[1516]: Session 24 logged out. Waiting for processes to exit. May 17 10:01:27.572788 kubelet[2643]: W0517 10:01:27.572673 2643 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 17 10:01:27.576696 systemd[1]: Started sshd@24-10.0.0.72:22-10.0.0.1:36880.service - OpenSSH per-connection server daemon (10.0.0.1:36880). May 17 10:01:27.580045 systemd-logind[1516]: Removed session 24. May 17 10:01:27.583379 kubelet[2643]: E0517 10:01:27.583049 2643 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 17 10:01:27.589255 systemd[1]: Created slice kubepods-burstable-podc3e3c902_0925_414f_956e_b72b4b26b03c.slice - libcontainer container kubepods-burstable-podc3e3c902_0925_414f_956e_b72b4b26b03c.slice. May 17 10:01:27.634965 sshd[4407]: Accepted publickey for core from 10.0.0.1 port 36880 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:27.636306 sshd-session[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:27.640364 systemd-logind[1516]: New session 25 of user core. May 17 10:01:27.650668 systemd[1]: Started session-25.scope - Session 25 of User core. 
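The reflector warning above ("secrets \"cilium-clustermesh\" is forbidden ... no relationship found between node 'localhost' and this object") is the node authorizer at work: a kubelet may only read a secret once a pod referencing it is bound to that node. One way to probe such a denial from whatever credentials are at hand is a SelfSubjectAccessReview; this is a sketch only, and the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Ask the API server whether the current identity may list the secret
	// that the reflector above was denied.
	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: "kube-system",
				Verb:      "list",
				Resource:  "secrets",
				Name:      "cilium-clustermesh",
			},
		},
	}
	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}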
May 17 10:01:27.693526 kubelet[2643]: I0517 10:01:27.693477 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c3e3c902-0925-414f-956e-b72b4b26b03c-cilium-cgroup\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.693781 kubelet[2643]: I0517 10:01:27.693644 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3e3c902-0925-414f-956e-b72b4b26b03c-cilium-config-path\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.693781 kubelet[2643]: I0517 10:01:27.693670 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4frrd\" (UniqueName: \"kubernetes.io/projected/c3e3c902-0925-414f-956e-b72b4b26b03c-kube-api-access-4frrd\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.693781 kubelet[2643]: I0517 10:01:27.693688 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c3e3c902-0925-414f-956e-b72b4b26b03c-host-proc-sys-net\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.693781 kubelet[2643]: I0517 10:01:27.693706 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c3e3c902-0925-414f-956e-b72b4b26b03c-bpf-maps\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.693781 kubelet[2643]: I0517 10:01:27.693721 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c3e3c902-0925-414f-956e-b72b4b26b03c-cilium-ipsec-secrets\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.693912 kubelet[2643]: I0517 10:01:27.693736 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3e3c902-0925-414f-956e-b72b4b26b03c-lib-modules\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.693912 kubelet[2643]: I0517 10:01:27.693750 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3e3c902-0925-414f-956e-b72b4b26b03c-xtables-lock\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.693912 kubelet[2643]: I0517 10:01:27.693767 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c3e3c902-0925-414f-956e-b72b4b26b03c-host-proc-sys-kernel\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.693912 kubelet[2643]: I0517 10:01:27.693837 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c3e3c902-0925-414f-956e-b72b4b26b03c-cni-path\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.693912 kubelet[2643]: I0517 10:01:27.693871 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3e3c902-0925-414f-956e-b72b4b26b03c-etc-cni-netd\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.693912 kubelet[2643]: I0517 10:01:27.693895 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c3e3c902-0925-414f-956e-b72b4b26b03c-hubble-tls\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.694034 kubelet[2643]: I0517 10:01:27.693918 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c3e3c902-0925-414f-956e-b72b4b26b03c-cilium-run\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.694034 kubelet[2643]: I0517 10:01:27.693933 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c3e3c902-0925-414f-956e-b72b4b26b03c-hostproc\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.694034 kubelet[2643]: I0517 10:01:27.693951 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c3e3c902-0925-414f-956e-b72b4b26b03c-clustermesh-secrets\") pod \"cilium-bmcvg\" (UID: \"c3e3c902-0925-414f-956e-b72b4b26b03c\") " pod="kube-system/cilium-bmcvg" May 17 10:01:27.699276 sshd[4409]: Connection closed by 10.0.0.1 port 36880 May 17 10:01:27.699579 sshd-session[4407]: pam_unix(sshd:session): session closed for user core May 17 10:01:27.710786 systemd[1]: sshd@24-10.0.0.72:22-10.0.0.1:36880.service: Deactivated successfully. May 17 10:01:27.712350 systemd[1]: session-25.scope: Deactivated successfully. May 17 10:01:27.714235 systemd-logind[1516]: Session 25 logged out. Waiting for processes to exit. May 17 10:01:27.717080 systemd[1]: Started sshd@25-10.0.0.72:22-10.0.0.1:36888.service - OpenSSH per-connection server daemon (10.0.0.1:36888). May 17 10:01:27.719127 systemd-logind[1516]: Removed session 25. May 17 10:01:27.770816 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 36888 ssh2: RSA SHA256:xWUFGIGJGo+HJme0dpHyBaxVmN4GTw4PLZEwYhuGsaQ May 17 10:01:27.771936 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:01:27.775804 systemd-logind[1516]: New session 26 of user core. May 17 10:01:27.786654 systemd[1]: Started session-26.scope - Session 26 of User core. 
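The VerifyControllerAttachedVolume entries above enumerate the volumes declared in the new cilium-bmcvg pod spec before the kubelet mounts them. The same list can be read back from the API server; a short sketch, assuming the pod still exists under that name and namespace and using an assumed kubeconfig path:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"cilium-bmcvg", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Each entry printed here corresponds to one VerifyControllerAttachedVolume line.
	for _, v := range pod.Spec.Volumes {
		fmt.Println(v.Name)
	}
}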
May 17 10:01:28.799188 kubelet[2643]: E0517 10:01:28.799156 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:28.799846 containerd[1530]: time="2025-05-17T10:01:28.799722079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmcvg,Uid:c3e3c902-0925-414f-956e-b72b4b26b03c,Namespace:kube-system,Attempt:0,}" May 17 10:01:28.813002 containerd[1530]: time="2025-05-17T10:01:28.812957219Z" level=info msg="connecting to shim 089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98" address="unix:///run/containerd/s/f1b4b2fb5b434a425e54097af350284466eeb65cb59379f91b26bcb1d14358c8" namespace=k8s.io protocol=ttrpc version=3 May 17 10:01:28.842908 systemd[1]: Started cri-containerd-089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98.scope - libcontainer container 089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98. May 17 10:01:28.868384 containerd[1530]: time="2025-05-17T10:01:28.868340394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmcvg,Uid:c3e3c902-0925-414f-956e-b72b4b26b03c,Namespace:kube-system,Attempt:0,} returns sandbox id \"089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98\"" May 17 10:01:28.869333 kubelet[2643]: E0517 10:01:28.869309 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:28.872466 containerd[1530]: time="2025-05-17T10:01:28.872350944Z" level=info msg="CreateContainer within sandbox \"089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 10:01:28.890707 containerd[1530]: time="2025-05-17T10:01:28.890668456Z" level=info msg="Container dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78: CDI devices from CRI Config.CDIDevices: []" May 17 10:01:28.894717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619359343.mount: Deactivated successfully. May 17 10:01:28.899400 containerd[1530]: time="2025-05-17T10:01:28.899357895Z" level=info msg="CreateContainer within sandbox \"089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78\"" May 17 10:01:28.899969 containerd[1530]: time="2025-05-17T10:01:28.899927123Z" level=info msg="StartContainer for \"dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78\"" May 17 10:01:28.900867 containerd[1530]: time="2025-05-17T10:01:28.900843998Z" level=info msg="connecting to shim dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78" address="unix:///run/containerd/s/f1b4b2fb5b434a425e54097af350284466eeb65cb59379f91b26bcb1d14358c8" protocol=ttrpc version=3 May 17 10:01:28.928651 systemd[1]: Started cri-containerd-dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78.scope - libcontainer container dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78. 
May 17 10:01:28.951728 containerd[1530]: time="2025-05-17T10:01:28.951683672Z" level=info msg="StartContainer for \"dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78\" returns successfully" May 17 10:01:28.963912 systemd[1]: cri-containerd-dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78.scope: Deactivated successfully. May 17 10:01:28.969937 containerd[1530]: time="2025-05-17T10:01:28.969889754Z" level=info msg="received exit event container_id:\"dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78\" id:\"dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78\" pid:4489 exited_at:{seconds:1747476088 nanos:969583102}" May 17 10:01:28.970060 containerd[1530]: time="2025-05-17T10:01:28.969906192Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78\" id:\"dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78\" pid:4489 exited_at:{seconds:1747476088 nanos:969583102}" May 17 10:01:29.094177 kubelet[2643]: E0517 10:01:29.094135 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:29.342404 kubelet[2643]: E0517 10:01:29.342358 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:29.346001 containerd[1530]: time="2025-05-17T10:01:29.345882588Z" level=info msg="CreateContainer within sandbox \"089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 10:01:29.353448 containerd[1530]: time="2025-05-17T10:01:29.353290748Z" level=info msg="Container dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5: CDI devices from CRI Config.CDIDevices: []" May 17 10:01:29.359331 containerd[1530]: time="2025-05-17T10:01:29.358964337Z" level=info msg="CreateContainer within sandbox \"089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5\"" May 17 10:01:29.360143 containerd[1530]: time="2025-05-17T10:01:29.360103359Z" level=info msg="StartContainer for \"dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5\"" May 17 10:01:29.361013 containerd[1530]: time="2025-05-17T10:01:29.360982083Z" level=info msg="connecting to shim dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5" address="unix:///run/containerd/s/f1b4b2fb5b434a425e54097af350284466eeb65cb59379f91b26bcb1d14358c8" protocol=ttrpc version=3 May 17 10:01:29.384230 systemd[1]: Started cri-containerd-dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5.scope - libcontainer container dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5. May 17 10:01:29.418817 containerd[1530]: time="2025-05-17T10:01:29.418780245Z" level=info msg="StartContainer for \"dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5\" returns successfully" May 17 10:01:29.425470 systemd[1]: cri-containerd-dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5.scope: Deactivated successfully. 
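The TaskExit events above, with their exit_status and exited_at fields, are containerd reporting that the short-lived mount-cgroup init container finished. Waiting for such an exit with the containerd Go client looks roughly like the following sketch, which reuses the container ID from the log and assumes its task is still registered (once the CRI layer has cleaned it up, the lookups fail instead):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	const id = "dd9fe1147873ac5eed3c02124a793bd163e029720fa2a1d0062122405a638e78"

	c, err := client.LoadContainer(ctx, id)
	if err != nil {
		log.Fatal(err)
	}
	t, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Wait delivers the same information the "TaskExit event" lines carry:
	// an exit code and an exit timestamp.
	ch, err := t.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	status := <-ch
	code, exitedAt, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("exit_status=%d exited_at=%s\n", code, exitedAt)
}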
May 17 10:01:29.427373 containerd[1530]: time="2025-05-17T10:01:29.427337665Z" level=info msg="received exit event container_id:\"dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5\" id:\"dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5\" pid:4535 exited_at:{seconds:1747476089 nanos:427048930}" May 17 10:01:29.427631 containerd[1530]: time="2025-05-17T10:01:29.427349344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5\" id:\"dfe93ab3477ec487f355963a653e62cf4e1cab190f588c0dd534c0a885f1bcd5\" pid:4535 exited_at:{seconds:1747476089 nanos:427048930}" May 17 10:01:30.166459 kubelet[2643]: E0517 10:01:30.166410 2643 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 10:01:30.346881 kubelet[2643]: E0517 10:01:30.346847 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:30.349670 containerd[1530]: time="2025-05-17T10:01:30.349626438Z" level=info msg="CreateContainer within sandbox \"089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 10:01:30.363965 containerd[1530]: time="2025-05-17T10:01:30.360184784Z" level=info msg="Container 6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a: CDI devices from CRI Config.CDIDevices: []" May 17 10:01:30.371033 containerd[1530]: time="2025-05-17T10:01:30.370995389Z" level=info msg="CreateContainer within sandbox \"089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a\"" May 17 10:01:30.371565 containerd[1530]: time="2025-05-17T10:01:30.371520906Z" level=info msg="StartContainer for \"6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a\"" May 17 10:01:30.373767 containerd[1530]: time="2025-05-17T10:01:30.373730967Z" level=info msg="connecting to shim 6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a" address="unix:///run/containerd/s/f1b4b2fb5b434a425e54097af350284466eeb65cb59379f91b26bcb1d14358c8" protocol=ttrpc version=3 May 17 10:01:30.394659 systemd[1]: Started cri-containerd-6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a.scope - libcontainer container 6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a. May 17 10:01:30.431005 containerd[1530]: time="2025-05-17T10:01:30.430611244Z" level=info msg="StartContainer for \"6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a\" returns successfully" May 17 10:01:30.430750 systemd[1]: cri-containerd-6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a.scope: Deactivated successfully. 
May 17 10:01:30.432536 containerd[1530]: time="2025-05-17T10:01:30.431391181Z" level=info msg="received exit event container_id:\"6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a\" id:\"6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a\" pid:4580 exited_at:{seconds:1747476090 nanos:431232433}" May 17 10:01:30.432536 containerd[1530]: time="2025-05-17T10:01:30.431481973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a\" id:\"6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a\" pid:4580 exited_at:{seconds:1747476090 nanos:431232433}" May 17 10:01:30.449639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f5b1b1fe95ea428215b14c2e4098811b2d78dfcb40e4c99cc41317b5c540f9a-rootfs.mount: Deactivated successfully. May 17 10:01:31.362429 kubelet[2643]: E0517 10:01:31.361970 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:31.364519 containerd[1530]: time="2025-05-17T10:01:31.364408455Z" level=info msg="CreateContainer within sandbox \"089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 10:01:31.370940 containerd[1530]: time="2025-05-17T10:01:31.370911203Z" level=info msg="Container b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b: CDI devices from CRI Config.CDIDevices: []" May 17 10:01:31.380156 containerd[1530]: time="2025-05-17T10:01:31.380127746Z" level=info msg="CreateContainer within sandbox \"089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b\"" May 17 10:01:31.380558 containerd[1530]: time="2025-05-17T10:01:31.380524316Z" level=info msg="StartContainer for \"b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b\"" May 17 10:01:31.381264 containerd[1530]: time="2025-05-17T10:01:31.381240262Z" level=info msg="connecting to shim b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b" address="unix:///run/containerd/s/f1b4b2fb5b434a425e54097af350284466eeb65cb59379f91b26bcb1d14358c8" protocol=ttrpc version=3 May 17 10:01:31.403633 systemd[1]: Started cri-containerd-b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b.scope - libcontainer container b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b. May 17 10:01:31.425456 systemd[1]: cri-containerd-b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b.scope: Deactivated successfully. 
May 17 10:01:31.426651 containerd[1530]: time="2025-05-17T10:01:31.426547558Z" level=info msg="received exit event container_id:\"b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b\" id:\"b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b\" pid:4618 exited_at:{seconds:1747476091 nanos:426261020}" May 17 10:01:31.426738 containerd[1530]: time="2025-05-17T10:01:31.426636111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b\" id:\"b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b\" pid:4618 exited_at:{seconds:1747476091 nanos:426261020}" May 17 10:01:31.432834 containerd[1530]: time="2025-05-17T10:01:31.432808285Z" level=info msg="StartContainer for \"b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b\" returns successfully" May 17 10:01:31.443447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7c0e36299b80aadbf27a7ce66c632e4856362b46f747d934dfb7f2832d1500b-rootfs.mount: Deactivated successfully. May 17 10:01:32.028960 kubelet[2643]: I0517 10:01:32.028916 2643 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T10:01:32Z","lastTransitionTime":"2025-05-17T10:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 10:01:32.367265 kubelet[2643]: E0517 10:01:32.367234 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:32.377323 containerd[1530]: time="2025-05-17T10:01:32.377280845Z" level=info msg="CreateContainer within sandbox \"089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 10:01:32.389425 containerd[1530]: time="2025-05-17T10:01:32.389388993Z" level=info msg="Container 0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7: CDI devices from CRI Config.CDIDevices: []" May 17 10:01:32.396946 containerd[1530]: time="2025-05-17T10:01:32.396896264Z" level=info msg="CreateContainer within sandbox \"089dc3c618d22d8bf7036e4e08d7f03cb7411c9e907deabbb44f84d23abf1e98\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7\"" May 17 10:01:32.397624 containerd[1530]: time="2025-05-17T10:01:32.397593735Z" level=info msg="StartContainer for \"0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7\"" May 17 10:01:32.398458 containerd[1530]: time="2025-05-17T10:01:32.398332403Z" level=info msg="connecting to shim 0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7" address="unix:///run/containerd/s/f1b4b2fb5b434a425e54097af350284466eeb65cb59379f91b26bcb1d14358c8" protocol=ttrpc version=3 May 17 10:01:32.422664 systemd[1]: Started cri-containerd-0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7.scope - libcontainer container 0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7. 
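While the init containers run, the kubelet keeps reporting "Container runtime network not ready ... cni plugin not initialized" and even marks the node NotReady; the condition clears only after the cilium-agent container started here brings the CNI plugin up. That node condition can be checked directly; the node name "localhost" is taken from the log, the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			// Mirrors the "Node became not ready" / KubeletNotReady messages above.
			fmt.Printf("Ready=%s reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}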
May 17 10:01:32.456378 containerd[1530]: time="2025-05-17T10:01:32.456339200Z" level=info msg="StartContainer for \"0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7\" returns successfully" May 17 10:01:32.508746 containerd[1530]: time="2025-05-17T10:01:32.508700234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7\" id:\"4279efcd48493a1bbd80e2ec7ff52ef66f7a0fbac9b38f7e2aed7edf4bcecd0f\" pid:4686 exited_at:{seconds:1747476092 nanos:508360338}" May 17 10:01:32.709514 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 17 10:01:33.374041 kubelet[2643]: E0517 10:01:33.373939 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:34.095665 kubelet[2643]: E0517 10:01:34.095124 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:34.119706 containerd[1530]: time="2025-05-17T10:01:34.119655937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7\" id:\"01089658766427e661de70fdbe9ba93a5d045b9aaa19dffe55e475dd27487517\" pid:4795 exit_status:1 exited_at:{seconds:1747476094 nanos:119245362}" May 17 10:01:34.800799 kubelet[2643]: E0517 10:01:34.800731 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:35.532445 systemd-networkd[1440]: lxc_health: Link UP May 17 10:01:35.533587 systemd-networkd[1440]: lxc_health: Gained carrier May 17 10:01:36.271875 containerd[1530]: time="2025-05-17T10:01:36.271828630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7\" id:\"fe24aae4ec5644e80fccbb50da50383bc3b1c5351ef3d53e59adf931a212a4ea\" pid:5221 exited_at:{seconds:1747476096 nanos:270842120}" May 17 10:01:36.801830 kubelet[2643]: E0517 10:01:36.801778 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:36.817707 kubelet[2643]: I0517 10:01:36.817645 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bmcvg" podStartSLOduration=9.817627395 podStartE2EDuration="9.817627395s" podCreationTimestamp="2025-05-17 10:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:01:33.389195633 +0000 UTC m=+83.392329773" watchObservedRunningTime="2025-05-17 10:01:36.817627395 +0000 UTC m=+86.820761415" May 17 10:01:37.143733 systemd-networkd[1440]: lxc_health: Gained IPv6LL May 17 10:01:37.380108 kubelet[2643]: E0517 10:01:37.380077 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:01:38.391025 containerd[1530]: time="2025-05-17T10:01:38.390362580Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7\" 
id:\"bc5414c02941c1fd3729611d3233476c4f8eb1f9bb9f02199011b78800e3ec36\" pid:5256 exited_at:{seconds:1747476098 nanos:389435939}" May 17 10:01:40.490523 containerd[1530]: time="2025-05-17T10:01:40.490419625Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7\" id:\"030b128f6bf06db143cd895975269208d1b7f4a0b78710a3fe7464af2717e554\" pid:5285 exited_at:{seconds:1747476100 nanos:490133714}" May 17 10:01:42.598701 containerd[1530]: time="2025-05-17T10:01:42.598663854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0eea90716b49984d7d67827e401db7e8e5173f974404be6234556579b82817f7\" id:\"f81bfcb61392444c2271db3cdef31deb64fcbfd8980ba2c3c116d3f4f5311e2d\" pid:5309 exited_at:{seconds:1747476102 nanos:598329183}" May 17 10:01:42.613455 sshd[4418]: Connection closed by 10.0.0.1 port 36888 May 17 10:01:42.614089 sshd-session[4416]: pam_unix(sshd:session): session closed for user core May 17 10:01:42.617582 systemd[1]: sshd@25-10.0.0.72:22-10.0.0.1:36888.service: Deactivated successfully. May 17 10:01:42.619353 systemd[1]: session-26.scope: Deactivated successfully. May 17 10:01:42.620087 systemd-logind[1516]: Session 26 logged out. Waiting for processes to exit. May 17 10:01:42.621195 systemd-logind[1516]: Removed session 26.
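The pod_startup_latency_tracker entry above reports podStartSLOduration for cilium-bmcvg once the pod is observed running. A similar measurement can be taken from outside the kubelet by watching the pod until its Ready condition turns true and comparing against its creation timestamp; the pod name and namespace come from the log, while the kubeconfig path is an assumption and the duration computed here is only a rough analogue of the kubelet's metric.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	w, err := cs.CoreV1().Pods("kube-system").Watch(context.Background(),
		metav1.ListOptions{FieldSelector: "metadata.name=cilium-bmcvg"})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				// Time from pod creation to the Ready condition being observed.
				fmt.Printf("pod ready %s after creation\n",
					time.Since(pod.CreationTimestamp.Time).Round(time.Millisecond))
				return
			}
		}
	}
}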