Sep 9 05:16:07.761991 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 9 05:16:07.762010 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 03:38:34 -00 2025 Sep 9 05:16:07.762020 kernel: KASLR enabled Sep 9 05:16:07.762026 kernel: efi: EFI v2.7 by EDK II Sep 9 05:16:07.762031 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18 Sep 9 05:16:07.762037 kernel: random: crng init done Sep 9 05:16:07.762043 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Sep 9 05:16:07.762049 kernel: secureboot: Secure boot enabled Sep 9 05:16:07.762055 kernel: ACPI: Early table checksum verification disabled Sep 9 05:16:07.762062 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Sep 9 05:16:07.762068 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 9 05:16:07.762073 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 05:16:07.762079 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 05:16:07.762084 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 05:16:07.762091 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 05:16:07.762098 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 05:16:07.762104 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 05:16:07.762111 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 05:16:07.762117 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 05:16:07.762124 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 05:16:07.762130 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 9 05:16:07.762136 kernel: ACPI: Use ACPI SPCR as default console: No Sep 9 05:16:07.762143 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 05:16:07.762149 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff] Sep 9 05:16:07.762155 kernel: Zone ranges: Sep 9 05:16:07.762162 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 05:16:07.762169 kernel: DMA32 empty Sep 9 05:16:07.762175 kernel: Normal empty Sep 9 05:16:07.762181 kernel: Device empty Sep 9 05:16:07.762187 kernel: Movable zone start for each node Sep 9 05:16:07.762193 kernel: Early memory node ranges Sep 9 05:16:07.762199 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Sep 9 05:16:07.762205 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Sep 9 05:16:07.762211 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Sep 9 05:16:07.762217 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Sep 9 05:16:07.762223 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Sep 9 05:16:07.762229 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Sep 9 05:16:07.762236 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Sep 9 05:16:07.762242 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Sep 9 05:16:07.762248 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 9 05:16:07.762256 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 05:16:07.762263 kernel: On 
node 0, zone DMA: 12288 pages in unavailable ranges Sep 9 05:16:07.762269 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Sep 9 05:16:07.762275 kernel: psci: probing for conduit method from ACPI. Sep 9 05:16:07.762283 kernel: psci: PSCIv1.1 detected in firmware. Sep 9 05:16:07.762289 kernel: psci: Using standard PSCI v0.2 function IDs Sep 9 05:16:07.762296 kernel: psci: Trusted OS migration not required Sep 9 05:16:07.762302 kernel: psci: SMC Calling Convention v1.1 Sep 9 05:16:07.762309 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 9 05:16:07.762315 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 9 05:16:07.762322 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 9 05:16:07.762328 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 9 05:16:07.762335 kernel: Detected PIPT I-cache on CPU0 Sep 9 05:16:07.762342 kernel: CPU features: detected: GIC system register CPU interface Sep 9 05:16:07.762349 kernel: CPU features: detected: Spectre-v4 Sep 9 05:16:07.762355 kernel: CPU features: detected: Spectre-BHB Sep 9 05:16:07.762362 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 9 05:16:07.762369 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 9 05:16:07.762375 kernel: CPU features: detected: ARM erratum 1418040 Sep 9 05:16:07.762382 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 9 05:16:07.762388 kernel: alternatives: applying boot alternatives Sep 9 05:16:07.762395 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90 Sep 9 05:16:07.762402 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 05:16:07.762409 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 05:16:07.762417 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 05:16:07.762424 kernel: Fallback order for Node 0: 0 Sep 9 05:16:07.762431 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 9 05:16:07.762437 kernel: Policy zone: DMA Sep 9 05:16:07.762443 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 05:16:07.762450 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 9 05:16:07.762457 kernel: software IO TLB: area num 4. Sep 9 05:16:07.762463 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 9 05:16:07.762470 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Sep 9 05:16:07.762476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 05:16:07.762483 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 05:16:07.762490 kernel: rcu: RCU event tracing is enabled. Sep 9 05:16:07.762498 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 05:16:07.762504 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 05:16:07.762511 kernel: Tracing variant of Tasks RCU enabled. Sep 9 05:16:07.762517 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 9 05:16:07.762524 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 05:16:07.762530 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 05:16:07.762537 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 05:16:07.762544 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 9 05:16:07.762550 kernel: GICv3: 256 SPIs implemented Sep 9 05:16:07.762557 kernel: GICv3: 0 Extended SPIs implemented Sep 9 05:16:07.762563 kernel: Root IRQ handler: gic_handle_irq Sep 9 05:16:07.762570 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 9 05:16:07.762577 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 9 05:16:07.762583 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 9 05:16:07.762590 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 9 05:16:07.762596 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 9 05:16:07.762602 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 9 05:16:07.762609 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 9 05:16:07.762615 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 9 05:16:07.762622 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 05:16:07.762628 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 05:16:07.762634 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 9 05:16:07.762641 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 9 05:16:07.762649 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 9 05:16:07.762655 kernel: arm-pv: using stolen time PV Sep 9 05:16:07.762662 kernel: Console: colour dummy device 80x25 Sep 9 05:16:07.762668 kernel: ACPI: Core revision 20240827 Sep 9 05:16:07.762675 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 9 05:16:07.762681 kernel: pid_max: default: 32768 minimum: 301 Sep 9 05:16:07.762688 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 05:16:07.762694 kernel: landlock: Up and running. Sep 9 05:16:07.762701 kernel: SELinux: Initializing. Sep 9 05:16:07.762708 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 05:16:07.762715 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 05:16:07.762722 kernel: rcu: Hierarchical SRCU implementation. Sep 9 05:16:07.762728 kernel: rcu: Max phase no-delay instances is 400. Sep 9 05:16:07.762735 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 9 05:16:07.762742 kernel: Remapping and enabling EFI services. Sep 9 05:16:07.762748 kernel: smp: Bringing up secondary CPUs ... 
Sep 9 05:16:07.762754 kernel: Detected PIPT I-cache on CPU1 Sep 9 05:16:07.762771 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 9 05:16:07.762781 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 9 05:16:07.762792 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 05:16:07.762799 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 9 05:16:07.762808 kernel: Detected PIPT I-cache on CPU2 Sep 9 05:16:07.762836 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 9 05:16:07.762844 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 9 05:16:07.762850 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 05:16:07.762857 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 9 05:16:07.762864 kernel: Detected PIPT I-cache on CPU3 Sep 9 05:16:07.762873 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 9 05:16:07.762880 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Sep 9 05:16:07.762887 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 05:16:07.762894 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 9 05:16:07.762901 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 05:16:07.762907 kernel: SMP: Total of 4 processors activated. Sep 9 05:16:07.762914 kernel: CPU: All CPU(s) started at EL1 Sep 9 05:16:07.762921 kernel: CPU features: detected: 32-bit EL0 Support Sep 9 05:16:07.762928 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 9 05:16:07.762936 kernel: CPU features: detected: Common not Private translations Sep 9 05:16:07.762943 kernel: CPU features: detected: CRC32 instructions Sep 9 05:16:07.762950 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 9 05:16:07.762957 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 9 05:16:07.762964 kernel: CPU features: detected: LSE atomic instructions Sep 9 05:16:07.762970 kernel: CPU features: detected: Privileged Access Never Sep 9 05:16:07.762978 kernel: CPU features: detected: RAS Extension Support Sep 9 05:16:07.762985 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 9 05:16:07.762992 kernel: alternatives: applying system-wide alternatives Sep 9 05:16:07.763000 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 9 05:16:07.763008 kernel: Memory: 2422372K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 127580K reserved, 16384K cma-reserved) Sep 9 05:16:07.763016 kernel: devtmpfs: initialized Sep 9 05:16:07.763023 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 05:16:07.763030 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 05:16:07.763037 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 9 05:16:07.763044 kernel: 0 pages in range for non-PLT usage Sep 9 05:16:07.763052 kernel: 508560 pages in range for PLT usage Sep 9 05:16:07.763059 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 05:16:07.763067 kernel: SMBIOS 3.0.0 present. 
Sep 9 05:16:07.763081 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 9 05:16:07.763088 kernel: DMI: Memory slots populated: 1/1 Sep 9 05:16:07.763095 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 05:16:07.763102 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 9 05:16:07.763109 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 9 05:16:07.763117 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 9 05:16:07.763124 kernel: audit: initializing netlink subsys (disabled) Sep 9 05:16:07.763132 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Sep 9 05:16:07.763141 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 05:16:07.763149 kernel: cpuidle: using governor menu Sep 9 05:16:07.763156 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 9 05:16:07.763163 kernel: ASID allocator initialised with 32768 entries Sep 9 05:16:07.763170 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 05:16:07.763178 kernel: Serial: AMBA PL011 UART driver Sep 9 05:16:07.763185 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 05:16:07.763193 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 05:16:07.763200 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 9 05:16:07.763209 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 9 05:16:07.763216 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 05:16:07.763223 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 05:16:07.763231 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 9 05:16:07.763238 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 9 05:16:07.763245 kernel: ACPI: Added _OSI(Module Device) Sep 9 05:16:07.763252 kernel: ACPI: Added _OSI(Processor Device) Sep 9 05:16:07.763259 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 05:16:07.763266 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 05:16:07.763275 kernel: ACPI: Interpreter enabled Sep 9 05:16:07.763282 kernel: ACPI: Using GIC for interrupt routing Sep 9 05:16:07.763289 kernel: ACPI: MCFG table detected, 1 entries Sep 9 05:16:07.763296 kernel: ACPI: CPU0 has been hot-added Sep 9 05:16:07.763303 kernel: ACPI: CPU1 has been hot-added Sep 9 05:16:07.763309 kernel: ACPI: CPU2 has been hot-added Sep 9 05:16:07.763316 kernel: ACPI: CPU3 has been hot-added Sep 9 05:16:07.763323 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 9 05:16:07.763330 kernel: printk: legacy console [ttyAMA0] enabled Sep 9 05:16:07.763338 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 05:16:07.763461 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 05:16:07.763537 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 9 05:16:07.763596 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 9 05:16:07.763652 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 9 05:16:07.763724 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 9 05:16:07.763733 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 9 05:16:07.763743 kernel: PCI host bridge to bus 0000:00 Sep 9 
05:16:07.763832 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 9 05:16:07.763894 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 9 05:16:07.763947 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 9 05:16:07.763998 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 05:16:07.764073 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 9 05:16:07.764142 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 9 05:16:07.764204 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 9 05:16:07.764265 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 9 05:16:07.764324 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 9 05:16:07.764383 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 9 05:16:07.764442 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 9 05:16:07.764509 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 9 05:16:07.764566 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 9 05:16:07.764619 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 9 05:16:07.764675 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 9 05:16:07.764684 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 9 05:16:07.764691 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 9 05:16:07.764699 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 9 05:16:07.764709 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 9 05:16:07.764716 kernel: iommu: Default domain type: Translated Sep 9 05:16:07.764723 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 9 05:16:07.764732 kernel: efivars: Registered efivars operations Sep 9 05:16:07.764739 kernel: vgaarb: loaded Sep 9 05:16:07.764746 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 9 05:16:07.764752 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 05:16:07.764764 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 05:16:07.764771 kernel: pnp: PnP ACPI init Sep 9 05:16:07.764871 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 9 05:16:07.764882 kernel: pnp: PnP ACPI: found 1 devices Sep 9 05:16:07.764892 kernel: NET: Registered PF_INET protocol family Sep 9 05:16:07.764899 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 05:16:07.764906 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 05:16:07.764914 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 05:16:07.764921 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 05:16:07.764929 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 05:16:07.764936 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 05:16:07.764944 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 05:16:07.764952 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 05:16:07.764960 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 05:16:07.764967 kernel: PCI: CLS 0 bytes, default 64 Sep 9 05:16:07.764975 kernel: kvm [1]: HYP mode not available Sep 9 05:16:07.764982 kernel: Initialise system 
trusted keyrings Sep 9 05:16:07.764990 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 05:16:07.764997 kernel: Key type asymmetric registered Sep 9 05:16:07.765005 kernel: Asymmetric key parser 'x509' registered Sep 9 05:16:07.765012 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 9 05:16:07.765019 kernel: io scheduler mq-deadline registered Sep 9 05:16:07.765027 kernel: io scheduler kyber registered Sep 9 05:16:07.765034 kernel: io scheduler bfq registered Sep 9 05:16:07.765041 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 9 05:16:07.765048 kernel: ACPI: button: Power Button [PWRB] Sep 9 05:16:07.765055 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 9 05:16:07.765118 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 9 05:16:07.765127 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 05:16:07.765134 kernel: thunder_xcv, ver 1.0 Sep 9 05:16:07.765142 kernel: thunder_bgx, ver 1.0 Sep 9 05:16:07.765150 kernel: nicpf, ver 1.0 Sep 9 05:16:07.765158 kernel: nicvf, ver 1.0 Sep 9 05:16:07.765224 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 9 05:16:07.765281 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T05:16:07 UTC (1757394967) Sep 9 05:16:07.765290 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 05:16:07.765298 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 9 05:16:07.765305 kernel: watchdog: NMI not fully supported Sep 9 05:16:07.765311 kernel: watchdog: Hard watchdog permanently disabled Sep 9 05:16:07.765320 kernel: NET: Registered PF_INET6 protocol family Sep 9 05:16:07.765327 kernel: Segment Routing with IPv6 Sep 9 05:16:07.765334 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 05:16:07.765341 kernel: NET: Registered PF_PACKET protocol family Sep 9 05:16:07.765348 kernel: Key type dns_resolver registered Sep 9 05:16:07.765355 kernel: registered taskstats version 1 Sep 9 05:16:07.765362 kernel: Loading compiled-in X.509 certificates Sep 9 05:16:07.765369 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 44d1e8b5c5ffbaa3cedd99c03d41580671fabec5' Sep 9 05:16:07.765376 kernel: Demotion targets for Node 0: null Sep 9 05:16:07.765384 kernel: Key type .fscrypt registered Sep 9 05:16:07.765391 kernel: Key type fscrypt-provisioning registered Sep 9 05:16:07.765398 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 05:16:07.765405 kernel: ima: Allocated hash algorithm: sha1 Sep 9 05:16:07.765412 kernel: ima: No architecture policies found Sep 9 05:16:07.765419 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 9 05:16:07.765425 kernel: clk: Disabling unused clocks Sep 9 05:16:07.765432 kernel: PM: genpd: Disabling unused power domains Sep 9 05:16:07.765439 kernel: Warning: unable to open an initial console. Sep 9 05:16:07.765448 kernel: Freeing unused kernel memory: 38976K Sep 9 05:16:07.765454 kernel: Run /init as init process Sep 9 05:16:07.765461 kernel: with arguments: Sep 9 05:16:07.765468 kernel: /init Sep 9 05:16:07.765474 kernel: with environment: Sep 9 05:16:07.765481 kernel: HOME=/ Sep 9 05:16:07.765488 kernel: TERM=linux Sep 9 05:16:07.765495 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 05:16:07.765502 systemd[1]: Successfully made /usr/ read-only. 
Sep 9 05:16:07.765513 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 05:16:07.765521 systemd[1]: Detected virtualization kvm. Sep 9 05:16:07.765528 systemd[1]: Detected architecture arm64. Sep 9 05:16:07.765535 systemd[1]: Running in initrd. Sep 9 05:16:07.765542 systemd[1]: No hostname configured, using default hostname. Sep 9 05:16:07.765550 systemd[1]: Hostname set to . Sep 9 05:16:07.765558 systemd[1]: Initializing machine ID from VM UUID. Sep 9 05:16:07.765567 systemd[1]: Queued start job for default target initrd.target. Sep 9 05:16:07.765575 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:16:07.765583 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:16:07.765591 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 05:16:07.765599 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 05:16:07.765607 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 05:16:07.765615 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 05:16:07.765625 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 05:16:07.765633 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 05:16:07.765640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:16:07.765648 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:16:07.765655 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:16:07.765662 systemd[1]: Reached target slices.target - Slice Units. Sep 9 05:16:07.765670 systemd[1]: Reached target swap.target - Swaps. Sep 9 05:16:07.765678 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:16:07.765686 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 05:16:07.765694 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 05:16:07.765701 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 05:16:07.765709 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 05:16:07.765716 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:16:07.765724 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:16:07.765731 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:16:07.765739 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:16:07.765746 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 05:16:07.765756 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 05:16:07.765771 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Sep 9 05:16:07.765779 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 05:16:07.765787 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 05:16:07.765794 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:16:07.765802 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 05:16:07.765810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:16:07.765829 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 05:16:07.765844 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:16:07.765852 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 05:16:07.765860 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 05:16:07.765885 systemd-journald[246]: Collecting audit messages is disabled. Sep 9 05:16:07.765904 systemd-journald[246]: Journal started Sep 9 05:16:07.765922 systemd-journald[246]: Runtime Journal (/run/log/journal/3008dab22bc44c4196539cc79c04d20f) is 6M, max 48.5M, 42.4M free. Sep 9 05:16:07.770127 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 05:16:07.757594 systemd-modules-load[247]: Inserted module 'overlay' Sep 9 05:16:07.771858 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:16:07.772277 systemd-modules-load[247]: Inserted module 'br_netfilter' Sep 9 05:16:07.774781 kernel: Bridge firewalling registered Sep 9 05:16:07.774798 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:16:07.776026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:16:07.777995 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:16:07.781498 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 05:16:07.783202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:16:07.785318 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:16:07.794633 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:16:07.801794 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:16:07.803860 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 05:16:07.805898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:16:07.807038 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:16:07.810335 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:16:07.812170 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:16:07.813971 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 9 05:16:07.832342 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90 Sep 9 05:16:07.845408 systemd-resolved[286]: Positive Trust Anchors: Sep 9 05:16:07.845425 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:16:07.845456 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:16:07.850060 systemd-resolved[286]: Defaulting to hostname 'linux'. Sep 9 05:16:07.850940 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:16:07.853666 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:16:07.901843 kernel: SCSI subsystem initialized Sep 9 05:16:07.906833 kernel: Loading iSCSI transport class v2.0-870. Sep 9 05:16:07.913840 kernel: iscsi: registered transport (tcp) Sep 9 05:16:07.926851 kernel: iscsi: registered transport (qla4xxx) Sep 9 05:16:07.926866 kernel: QLogic iSCSI HBA Driver Sep 9 05:16:07.941829 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:16:07.960199 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:16:07.961767 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:16:08.004593 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 05:16:08.006798 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 05:16:08.062839 kernel: raid6: neonx8 gen() 15098 MB/s Sep 9 05:16:08.079841 kernel: raid6: neonx4 gen() 15845 MB/s Sep 9 05:16:08.096833 kernel: raid6: neonx2 gen() 13281 MB/s Sep 9 05:16:08.113828 kernel: raid6: neonx1 gen() 10483 MB/s Sep 9 05:16:08.130835 kernel: raid6: int64x8 gen() 6893 MB/s Sep 9 05:16:08.147829 kernel: raid6: int64x4 gen() 7341 MB/s Sep 9 05:16:08.164838 kernel: raid6: int64x2 gen() 6090 MB/s Sep 9 05:16:08.181835 kernel: raid6: int64x1 gen() 5052 MB/s Sep 9 05:16:08.181857 kernel: raid6: using algorithm neonx4 gen() 15845 MB/s Sep 9 05:16:08.198835 kernel: raid6: .... xor() 12292 MB/s, rmw enabled Sep 9 05:16:08.198849 kernel: raid6: using neon recovery algorithm Sep 9 05:16:08.203892 kernel: xor: measuring software checksum speed Sep 9 05:16:08.203912 kernel: 8regs : 21618 MB/sec Sep 9 05:16:08.204924 kernel: 32regs : 21658 MB/sec Sep 9 05:16:08.204938 kernel: arm64_neon : 28032 MB/sec Sep 9 05:16:08.204948 kernel: xor: using function: arm64_neon (28032 MB/sec) Sep 9 05:16:08.256843 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 05:16:08.262701 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Sep 9 05:16:08.266240 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:16:08.295738 systemd-udevd[498]: Using default interface naming scheme 'v255'. Sep 9 05:16:08.299735 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:16:08.302193 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 05:16:08.336716 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Sep 9 05:16:08.357084 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:16:08.359174 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:16:08.408517 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:16:08.411006 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 05:16:08.458622 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 9 05:16:08.458810 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 05:16:08.471936 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:16:08.476726 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 05:16:08.476745 kernel: GPT:9289727 != 19775487 Sep 9 05:16:08.476765 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 05:16:08.476776 kernel: GPT:9289727 != 19775487 Sep 9 05:16:08.476784 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 05:16:08.476793 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:16:08.471998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:16:08.474674 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:16:08.478258 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:16:08.503804 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 05:16:08.509371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:16:08.510682 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 05:16:08.518667 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 05:16:08.527714 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 05:16:08.533925 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 05:16:08.535114 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 05:16:08.538156 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:16:08.540368 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:16:08.542446 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:16:08.545022 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 05:16:08.546792 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 05:16:08.563173 disk-uuid[592]: Primary Header is updated. Sep 9 05:16:08.563173 disk-uuid[592]: Secondary Entries is updated. Sep 9 05:16:08.563173 disk-uuid[592]: Secondary Header is updated. 
Sep 9 05:16:08.567854 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:16:08.568558 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:16:09.573846 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:16:09.574544 disk-uuid[595]: The operation has completed successfully. Sep 9 05:16:09.598664 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 05:16:09.599776 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 05:16:09.626323 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 05:16:09.656775 sh[612]: Success Sep 9 05:16:09.669175 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 05:16:09.669215 kernel: device-mapper: uevent: version 1.0.3 Sep 9 05:16:09.669236 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 05:16:09.675985 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 9 05:16:09.697501 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 05:16:09.707320 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 05:16:09.709666 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 05:16:09.718963 kernel: BTRFS: device fsid 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (624) Sep 9 05:16:09.718995 kernel: BTRFS info (device dm-0): first mount of filesystem 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364 Sep 9 05:16:09.719013 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 9 05:16:09.723837 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 05:16:09.723862 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 05:16:09.724408 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 05:16:09.725653 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:16:09.726991 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 05:16:09.727726 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 05:16:09.729318 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 05:16:09.750294 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655) Sep 9 05:16:09.750328 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6 Sep 9 05:16:09.750339 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 05:16:09.753014 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:16:09.753045 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:16:09.756844 kernel: BTRFS info (device vda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6 Sep 9 05:16:09.759863 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 05:16:09.763041 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 05:16:09.821195 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:16:09.824476 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 9 05:16:09.858649 ignition[706]: Ignition 2.22.0 Sep 9 05:16:09.858664 ignition[706]: Stage: fetch-offline Sep 9 05:16:09.859133 systemd-networkd[803]: lo: Link UP Sep 9 05:16:09.858692 ignition[706]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:16:09.859136 systemd-networkd[803]: lo: Gained carrier Sep 9 05:16:09.858700 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:16:09.859783 systemd-networkd[803]: Enumeration completed Sep 9 05:16:09.858782 ignition[706]: parsed url from cmdline: "" Sep 9 05:16:09.859873 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:16:09.858786 ignition[706]: no config URL provided Sep 9 05:16:09.860155 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:16:09.858790 ignition[706]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:16:09.860159 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:16:09.858798 ignition[706]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:16:09.860726 systemd-networkd[803]: eth0: Link UP Sep 9 05:16:09.858830 ignition[706]: op(1): [started] loading QEMU firmware config module Sep 9 05:16:09.860894 systemd-networkd[803]: eth0: Gained carrier Sep 9 05:16:09.858834 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 05:16:09.860903 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:16:09.868772 ignition[706]: op(1): [finished] loading QEMU firmware config module Sep 9 05:16:09.861928 systemd[1]: Reached target network.target - Network. Sep 9 05:16:09.874861 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 05:16:09.917586 ignition[706]: parsing config with SHA512: 16f29e3e7f367d5aa1521e42c24b120d43674791a0f0440df052ecf51497f4219d7061bfaa205682b9238984f62dde305027f49db930a084246bfe4361c2e9d1 Sep 9 05:16:09.923486 unknown[706]: fetched base config from "system" Sep 9 05:16:09.923498 unknown[706]: fetched user config from "qemu" Sep 9 05:16:09.924735 ignition[706]: fetch-offline: fetch-offline passed Sep 9 05:16:09.924805 ignition[706]: Ignition finished successfully Sep 9 05:16:09.927892 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:16:09.929159 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 05:16:09.929944 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 05:16:09.961602 ignition[810]: Ignition 2.22.0 Sep 9 05:16:09.961623 ignition[810]: Stage: kargs Sep 9 05:16:09.961743 ignition[810]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:16:09.961763 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:16:09.962504 ignition[810]: kargs: kargs passed Sep 9 05:16:09.962540 ignition[810]: Ignition finished successfully Sep 9 05:16:09.967335 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 05:16:09.969422 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 05:16:10.000166 ignition[819]: Ignition 2.22.0 Sep 9 05:16:10.000185 ignition[819]: Stage: disks Sep 9 05:16:10.000306 ignition[819]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:16:10.000314 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:16:10.001095 ignition[819]: disks: disks passed Sep 9 05:16:10.003370 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 05:16:10.001137 ignition[819]: Ignition finished successfully Sep 9 05:16:10.004978 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 05:16:10.006702 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 05:16:10.008482 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:16:10.010321 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:16:10.012308 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:16:10.014693 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 05:16:10.038604 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 05:16:10.043249 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 05:16:10.049514 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 05:16:10.109842 kernel: EXT4-fs (vda9): mounted filesystem 88574756-967d-44b3-be66-46689c8baf27 r/w with ordered data mode. Quota mode: none. Sep 9 05:16:10.110588 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 05:16:10.111873 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 05:16:10.115048 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:16:10.117483 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 05:16:10.119359 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 05:16:10.120920 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 05:16:10.120943 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:16:10.132307 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 05:16:10.134627 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 05:16:10.139091 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839) Sep 9 05:16:10.139113 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6 Sep 9 05:16:10.139123 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 05:16:10.141385 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:16:10.141427 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:16:10.142375 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:16:10.169230 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 05:16:10.173294 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Sep 9 05:16:10.176081 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 05:16:10.178643 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 05:16:10.241781 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 9 05:16:10.245048 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 05:16:10.246573 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 05:16:10.264856 kernel: BTRFS info (device vda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6 Sep 9 05:16:10.274956 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 05:16:10.289039 ignition[954]: INFO : Ignition 2.22.0 Sep 9 05:16:10.289039 ignition[954]: INFO : Stage: mount Sep 9 05:16:10.290312 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:16:10.290312 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:16:10.290312 ignition[954]: INFO : mount: mount passed Sep 9 05:16:10.290312 ignition[954]: INFO : Ignition finished successfully Sep 9 05:16:10.293382 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 05:16:10.295378 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 05:16:10.837885 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 05:16:10.839408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:16:10.861843 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966) Sep 9 05:16:10.863855 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6 Sep 9 05:16:10.863906 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 05:16:10.866351 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:16:10.866380 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:16:10.867670 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:16:10.897472 ignition[983]: INFO : Ignition 2.22.0 Sep 9 05:16:10.897472 ignition[983]: INFO : Stage: files Sep 9 05:16:10.899068 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:16:10.899068 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:16:10.899068 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Sep 9 05:16:10.901802 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 05:16:10.901802 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 05:16:10.904108 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 05:16:10.904108 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 05:16:10.904108 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 05:16:10.903808 unknown[983]: wrote ssh authorized keys file for user: core Sep 9 05:16:10.908162 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 9 05:16:10.908162 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 9 05:16:10.957851 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 05:16:10.967944 systemd-networkd[803]: eth0: Gained IPv6LL Sep 9 05:16:11.084853 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 9 05:16:11.087056 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 05:16:11.087056 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 9 05:16:11.273948 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 05:16:11.358858 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 05:16:11.360869 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 05:16:11.360869 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 05:16:11.360869 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:16:11.360869 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:16:11.360869 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:16:11.360869 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:16:11.360869 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:16:11.360869 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:16:11.375107 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:16:11.375107 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:16:11.375107 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 05:16:11.375107 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 05:16:11.375107 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 05:16:11.375107 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 9 05:16:11.658691 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 05:16:11.979204 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 05:16:11.979204 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 05:16:11.983145 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:16:11.983145 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:16:11.983145 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 05:16:11.983145 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 9 05:16:11.983145 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 05:16:11.983145 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 05:16:11.983145 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 9 05:16:11.983145 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 05:16:12.000270 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 05:16:12.003411 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 05:16:12.004944 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 05:16:12.004944 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 9 05:16:12.004944 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 05:16:12.009288 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:16:12.009288 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:16:12.009288 ignition[983]: INFO : files: files passed Sep 9 05:16:12.009288 ignition[983]: INFO : Ignition finished successfully Sep 9 05:16:12.009912 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 05:16:12.012574 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 05:16:12.015462 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 05:16:12.035576 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 05:16:12.036867 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 05:16:12.039075 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 05:16:12.040687 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:16:12.040687 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:16:12.043663 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:16:12.044156 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:16:12.046694 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 05:16:12.049461 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 05:16:12.093624 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 05:16:12.093728 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 9 05:16:12.097112 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 05:16:12.099085 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 05:16:12.101396 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 05:16:12.102115 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 05:16:12.123788 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:16:12.126147 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 05:16:12.149550 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:16:12.150907 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:16:12.153585 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 05:16:12.155464 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 05:16:12.155583 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:16:12.159443 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 05:16:12.160562 systemd[1]: Stopped target basic.target - Basic System. Sep 9 05:16:12.163060 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 05:16:12.165523 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:16:12.167925 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 05:16:12.170028 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:16:12.172396 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 05:16:12.174658 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:16:12.177468 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 05:16:12.179874 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 05:16:12.182219 systemd[1]: Stopped target swap.target - Swaps. Sep 9 05:16:12.183568 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 05:16:12.183690 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:16:12.186207 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:16:12.188132 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:16:12.190042 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 05:16:12.190149 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:16:12.192006 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 05:16:12.192113 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 05:16:12.194540 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 05:16:12.194651 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:16:12.197221 systemd[1]: Stopped target paths.target - Path Units. Sep 9 05:16:12.198746 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 05:16:12.201928 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:16:12.203938 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 05:16:12.205811 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 9 05:16:12.208024 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 05:16:12.208103 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 05:16:12.209687 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 05:16:12.209770 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 05:16:12.211416 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 05:16:12.211528 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:16:12.212970 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 05:16:12.213064 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 05:16:12.215584 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 05:16:12.217203 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 05:16:12.218261 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 05:16:12.218385 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:16:12.220500 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 05:16:12.220595 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:16:12.226169 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 05:16:12.229982 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 05:16:12.237839 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 05:16:12.243674 ignition[1038]: INFO : Ignition 2.22.0 Sep 9 05:16:12.243674 ignition[1038]: INFO : Stage: umount Sep 9 05:16:12.245395 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:16:12.245395 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:16:12.245395 ignition[1038]: INFO : umount: umount passed Sep 9 05:16:12.245395 ignition[1038]: INFO : Ignition finished successfully Sep 9 05:16:12.247490 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 05:16:12.247622 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 05:16:12.248985 systemd[1]: Stopped target network.target - Network. Sep 9 05:16:12.251145 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 05:16:12.251202 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 05:16:12.252812 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 05:16:12.252892 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 05:16:12.255002 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 05:16:12.255051 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 05:16:12.256699 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 05:16:12.256750 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 05:16:12.258054 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 05:16:12.259871 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 05:16:12.268918 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 05:16:12.269019 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 05:16:12.272844 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 05:16:12.273050 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 9 05:16:12.273160 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 05:16:12.279348 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 05:16:12.279982 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 05:16:12.281399 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 05:16:12.281448 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:16:12.284256 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 05:16:12.285268 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 05:16:12.285323 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:16:12.287335 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 05:16:12.287380 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:16:12.290102 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 05:16:12.290144 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 05:16:12.292155 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 05:16:12.292202 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:16:12.295963 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:16:12.298611 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 05:16:12.298676 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:16:12.312997 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 05:16:12.313102 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 05:16:12.315067 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 05:16:12.315159 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 05:16:12.316773 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 05:16:12.316895 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:16:12.319698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 05:16:12.319759 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 05:16:12.321479 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 05:16:12.321512 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:16:12.323625 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 05:16:12.323673 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:16:12.326531 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 05:16:12.326579 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 05:16:12.329295 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 05:16:12.329347 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:16:12.332158 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 05:16:12.332208 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 05:16:12.334651 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 05:16:12.335808 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Sep 9 05:16:12.335870 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:16:12.339082 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 05:16:12.339123 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:16:12.342109 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 05:16:12.342153 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:16:12.345426 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 05:16:12.345466 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:16:12.348177 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:16:12.348236 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:16:12.352108 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 05:16:12.352157 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 9 05:16:12.352184 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 05:16:12.352214 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:16:12.352460 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 05:16:12.353850 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 05:16:12.355468 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 05:16:12.357592 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 05:16:12.376475 systemd[1]: Switching root. Sep 9 05:16:12.407767 systemd-journald[246]: Journal stopped Sep 9 05:16:13.156834 systemd-journald[246]: Received SIGTERM from PID 1 (systemd). Sep 9 05:16:13.156885 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 05:16:13.156901 kernel: SELinux: policy capability open_perms=1 Sep 9 05:16:13.156910 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 05:16:13.156920 kernel: SELinux: policy capability always_check_network=0 Sep 9 05:16:13.156931 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 05:16:13.156942 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 05:16:13.156953 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 05:16:13.156962 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 05:16:13.156972 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 05:16:13.156981 kernel: audit: type=1403 audit(1757394972.599:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 05:16:13.156994 systemd[1]: Successfully loaded SELinux policy in 56.388ms. Sep 9 05:16:13.157022 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.199ms. Sep 9 05:16:13.157032 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 05:16:13.157043 systemd[1]: Detected virtualization kvm. Sep 9 05:16:13.157054 systemd[1]: Detected architecture arm64. 
Sep 9 05:16:13.157063 systemd[1]: Detected first boot. Sep 9 05:16:13.157073 systemd[1]: Initializing machine ID from VM UUID. Sep 9 05:16:13.157083 zram_generator::config[1083]: No configuration found. Sep 9 05:16:13.157093 kernel: NET: Registered PF_VSOCK protocol family Sep 9 05:16:13.157103 systemd[1]: Populated /etc with preset unit settings. Sep 9 05:16:13.157117 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 05:16:13.157127 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 05:16:13.157136 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 05:16:13.157148 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 05:16:13.157160 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 05:16:13.157170 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 05:16:13.157180 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 05:16:13.157189 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 05:16:13.157199 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 05:16:13.157209 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 05:16:13.157218 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 05:16:13.157229 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 05:16:13.157240 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:16:13.157251 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:16:13.157261 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 05:16:13.157271 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 05:16:13.157281 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 05:16:13.157291 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 05:16:13.157301 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 05:16:13.157310 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:16:13.157322 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:16:13.157331 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 05:16:13.157341 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 05:16:13.157350 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 05:16:13.157360 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 05:16:13.157370 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:16:13.157380 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:16:13.157390 systemd[1]: Reached target slices.target - Slice Units. Sep 9 05:16:13.157401 systemd[1]: Reached target swap.target - Swaps. Sep 9 05:16:13.157411 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 05:16:13.157420 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Sep 9 05:16:13.157430 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 05:16:13.157439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:16:13.157450 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:16:13.157460 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:16:13.157470 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 05:16:13.157480 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 05:16:13.157491 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 05:16:13.157501 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 05:16:13.157511 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 05:16:13.157521 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 05:16:13.157530 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 05:16:13.157541 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 05:16:13.157551 systemd[1]: Reached target machines.target - Containers. Sep 9 05:16:13.157561 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 05:16:13.157570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:16:13.157582 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 05:16:13.157592 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 05:16:13.157601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:16:13.157611 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:16:13.157620 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:16:13.157631 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 05:16:13.157640 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:16:13.157650 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 05:16:13.157662 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 05:16:13.157672 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 05:16:13.157682 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 05:16:13.157692 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 05:16:13.157702 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:16:13.157711 kernel: fuse: init (API version 7.41) Sep 9 05:16:13.157720 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:16:13.157730 kernel: loop: module loaded Sep 9 05:16:13.157748 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 9 05:16:13.157762 kernel: ACPI: bus type drm_connector registered Sep 9 05:16:13.157772 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:16:13.157782 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 05:16:13.157792 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 05:16:13.157802 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:16:13.157828 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 05:16:13.157842 systemd[1]: Stopped verity-setup.service. Sep 9 05:16:13.157881 systemd-journald[1158]: Collecting audit messages is disabled. Sep 9 05:16:13.157902 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 05:16:13.157914 systemd-journald[1158]: Journal started Sep 9 05:16:13.157934 systemd-journald[1158]: Runtime Journal (/run/log/journal/3008dab22bc44c4196539cc79c04d20f) is 6M, max 48.5M, 42.4M free. Sep 9 05:16:12.946004 systemd[1]: Queued start job for default target multi-user.target. Sep 9 05:16:12.966689 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 05:16:12.967076 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 05:16:13.162848 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 05:16:13.162923 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:16:13.163314 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 05:16:13.164418 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 05:16:13.165751 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 05:16:13.167016 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 05:16:13.168210 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 05:16:13.169569 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:16:13.171084 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 05:16:13.171247 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 05:16:13.172558 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:16:13.172716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:16:13.174031 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:16:13.174188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:16:13.175383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:16:13.175534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:16:13.177269 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 05:16:13.177436 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 05:16:13.178748 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:16:13.178953 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:16:13.180153 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:16:13.181545 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:16:13.183014 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Sep 9 05:16:13.184534 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 05:16:13.195380 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:16:13.197649 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 05:16:13.199619 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 05:16:13.200620 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 05:16:13.200647 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:16:13.202306 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 05:16:13.209515 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 05:16:13.210507 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:16:13.211789 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 05:16:13.213684 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 05:16:13.214712 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:16:13.216095 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 05:16:13.216949 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:16:13.217762 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:16:13.222966 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 05:16:13.225308 systemd-journald[1158]: Time spent on flushing to /var/log/journal/3008dab22bc44c4196539cc79c04d20f is 16.491ms for 891 entries. Sep 9 05:16:13.225308 systemd-journald[1158]: System Journal (/var/log/journal/3008dab22bc44c4196539cc79c04d20f) is 8M, max 195.6M, 187.6M free. Sep 9 05:16:13.254304 systemd-journald[1158]: Received client request to flush runtime journal. Sep 9 05:16:13.254351 kernel: loop0: detected capacity change from 0 to 100632 Sep 9 05:16:13.254368 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 05:16:13.225029 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 05:16:13.228341 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:16:13.232455 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 05:16:13.233794 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 05:16:13.235294 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 05:16:13.237911 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 05:16:13.240955 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 05:16:13.252220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:16:13.257773 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 05:16:13.265170 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Sep 9 05:16:13.265187 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. 
Sep 9 05:16:13.267789 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 05:16:13.269403 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:16:13.272468 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 05:16:13.277848 kernel: loop1: detected capacity change from 0 to 119368 Sep 9 05:16:13.298203 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 05:16:13.298842 kernel: loop2: detected capacity change from 0 to 211168 Sep 9 05:16:13.301978 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:16:13.322876 kernel: loop3: detected capacity change from 0 to 100632 Sep 9 05:16:13.326245 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Sep 9 05:16:13.326259 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Sep 9 05:16:13.329926 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:16:13.331827 kernel: loop4: detected capacity change from 0 to 119368 Sep 9 05:16:13.336842 kernel: loop5: detected capacity change from 0 to 211168 Sep 9 05:16:13.340843 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 05:16:13.341447 (sd-merge)[1224]: Merged extensions into '/usr'. Sep 9 05:16:13.344839 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 05:16:13.344856 systemd[1]: Reloading... Sep 9 05:16:13.409864 zram_generator::config[1251]: No configuration found. Sep 9 05:16:13.475721 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 05:16:13.547185 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 05:16:13.547362 systemd[1]: Reloading finished in 202 ms. Sep 9 05:16:13.563193 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 05:16:13.564482 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 05:16:13.574971 systemd[1]: Starting ensure-sysext.service... Sep 9 05:16:13.576593 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:16:13.598450 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Sep 9 05:16:13.598467 systemd[1]: Reloading... Sep 9 05:16:13.601128 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 05:16:13.601166 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 05:16:13.601387 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 05:16:13.601570 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 05:16:13.602279 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 05:16:13.602480 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Sep 9 05:16:13.602525 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Sep 9 05:16:13.605175 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 9 05:16:13.605189 systemd-tmpfiles[1287]: Skipping /boot Sep 9 05:16:13.610972 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:16:13.610987 systemd-tmpfiles[1287]: Skipping /boot Sep 9 05:16:13.648835 zram_generator::config[1314]: No configuration found. Sep 9 05:16:13.775768 systemd[1]: Reloading finished in 177 ms. Sep 9 05:16:13.795167 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 05:16:13.800489 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:16:13.814761 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:16:13.816892 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 05:16:13.818843 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 05:16:13.821992 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:16:13.824968 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:16:13.829957 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 05:16:13.835636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:16:13.838883 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:16:13.841183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:16:13.843414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:16:13.844478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:16:13.844585 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:16:13.847521 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 05:16:13.849390 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 05:16:13.851283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:16:13.851426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:16:13.861864 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 05:16:13.865209 systemd-udevd[1360]: Using default interface naming scheme 'v255'. Sep 9 05:16:13.865426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:16:13.865571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:16:13.867566 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:16:13.867723 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:16:13.872327 augenrules[1381]: No rules Sep 9 05:16:13.872454 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 05:16:13.874437 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:16:13.875863 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:16:13.881834 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 9 05:16:13.886237 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:16:13.887614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:16:13.889130 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:16:13.894018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:16:13.903453 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:16:13.906028 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:16:13.906897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:16:13.906946 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:16:13.909023 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:16:13.913665 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 05:16:13.914533 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 05:16:13.914965 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 05:16:13.917248 systemd[1]: Finished ensure-sysext.service. Sep 9 05:16:13.919178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:16:13.919322 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:16:13.921354 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:16:13.921491 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:16:13.924554 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 05:16:13.926208 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:16:13.928315 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:16:13.929499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:16:13.929635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:16:13.942944 augenrules[1404]: /sbin/augenrules: No change Sep 9 05:16:13.943930 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:16:13.943995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:16:13.947298 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 05:16:13.959131 augenrules[1455]: No rules Sep 9 05:16:13.960825 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:16:13.961019 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:16:13.976577 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 05:16:14.011560 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 05:16:14.015284 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Sep 9 05:16:14.043365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 05:16:14.073716 systemd-networkd[1427]: lo: Link UP Sep 9 05:16:14.073727 systemd-networkd[1427]: lo: Gained carrier Sep 9 05:16:14.074469 systemd-networkd[1427]: Enumeration completed Sep 9 05:16:14.074569 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:16:14.076908 systemd-resolved[1353]: Positive Trust Anchors: Sep 9 05:16:14.076924 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:16:14.076955 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:16:14.077011 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 05:16:14.077247 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:16:14.077250 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:16:14.077724 systemd-networkd[1427]: eth0: Link UP Sep 9 05:16:14.077851 systemd-networkd[1427]: eth0: Gained carrier Sep 9 05:16:14.077864 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:16:14.080796 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 05:16:14.084740 systemd-resolved[1353]: Defaulting to hostname 'linux'. Sep 9 05:16:14.086589 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:16:14.088826 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 05:16:14.090175 systemd[1]: Reached target network.target - Network. Sep 9 05:16:14.091467 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:16:14.093890 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:16:14.095057 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 05:16:14.096377 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 05:16:14.098351 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 05:16:14.100977 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 05:16:14.101017 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:16:14.101997 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 05:16:14.103159 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 05:16:14.104883 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Sep 9 05:16:14.106690 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:16:14.108552 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 05:16:14.112175 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 05:16:14.114868 systemd-networkd[1427]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 05:16:14.115890 systemd-timesyncd[1452]: Network configuration changed, trying to establish connection. Sep 9 05:16:14.116023 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 05:16:14.117447 systemd-timesyncd[1452]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 05:16:14.117552 systemd-timesyncd[1452]: Initial clock synchronization to Tue 2025-09-09 05:16:14.278034 UTC. Sep 9 05:16:14.119111 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 05:16:14.120439 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 05:16:14.128463 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 05:16:14.130075 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 05:16:14.133838 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 05:16:14.135253 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 05:16:14.143536 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:16:14.144591 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:16:14.145644 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:16:14.145677 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:16:14.146628 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 05:16:14.148612 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 05:16:14.154166 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 05:16:14.156315 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 05:16:14.158462 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 05:16:14.159634 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 05:16:14.161391 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 05:16:14.162996 jq[1494]: false Sep 9 05:16:14.164104 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 05:16:14.167945 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 05:16:14.170008 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 05:16:14.171146 extend-filesystems[1495]: Found /dev/vda6 Sep 9 05:16:14.173751 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 05:16:14.175459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:16:14.177313 extend-filesystems[1495]: Found /dev/vda9 Sep 9 05:16:14.177329 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 9 05:16:14.177787 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 05:16:14.178701 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 05:16:14.180152 extend-filesystems[1495]: Checking size of /dev/vda9 Sep 9 05:16:14.180546 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 05:16:14.186583 jq[1513]: true Sep 9 05:16:14.190874 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 05:16:14.193127 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 05:16:14.193572 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 05:16:14.193937 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 05:16:14.195911 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 05:16:14.197168 extend-filesystems[1495]: Resized partition /dev/vda9 Sep 9 05:16:14.198981 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 05:16:14.199442 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 05:16:14.203685 extend-filesystems[1525]: resize2fs 1.47.3 (8-Jul-2025) Sep 9 05:16:14.214863 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 05:16:14.218093 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 05:16:14.224255 update_engine[1512]: I20250909 05:16:14.223606 1512 main.cc:92] Flatcar Update Engine starting Sep 9 05:16:14.225421 jq[1526]: true Sep 9 05:16:14.236325 tar[1524]: linux-arm64/LICENSE Sep 9 05:16:14.236325 tar[1524]: linux-arm64/helm Sep 9 05:16:14.241028 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:16:14.257864 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 05:16:14.269127 extend-filesystems[1525]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 05:16:14.269127 extend-filesystems[1525]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 05:16:14.269127 extend-filesystems[1525]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 05:16:14.278382 extend-filesystems[1495]: Resized filesystem in /dev/vda9 Sep 9 05:16:14.275442 dbus-daemon[1492]: [system] SELinux support is enabled Sep 9 05:16:14.270361 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 05:16:14.285868 update_engine[1512]: I20250909 05:16:14.279611 1512 update_check_scheduler.cc:74] Next update check in 7m45s Sep 9 05:16:14.270582 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 05:16:14.271110 systemd-logind[1509]: New seat seat0. Sep 9 05:16:14.280396 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 05:16:14.282151 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 05:16:14.283268 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 05:16:14.287233 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 05:16:14.287723 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 9 05:16:14.289132 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 05:16:14.289150 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 05:16:14.291042 systemd[1]: Started update-engine.service - Update Engine. Sep 9 05:16:14.292713 dbus-daemon[1492]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 05:16:14.294294 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 05:16:14.313534 bash[1563]: Updated "/home/core/.ssh/authorized_keys" Sep 9 05:16:14.310904 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 05:16:14.313405 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 05:16:14.345933 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 05:16:14.407499 containerd[1527]: time="2025-09-09T05:16:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 05:16:14.408213 containerd[1527]: time="2025-09-09T05:16:14.408164640Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 05:16:14.420217 containerd[1527]: time="2025-09-09T05:16:14.420094880Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.16µs" Sep 9 05:16:14.420217 containerd[1527]: time="2025-09-09T05:16:14.420131200Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 05:16:14.420217 containerd[1527]: time="2025-09-09T05:16:14.420149600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 05:16:14.420651 containerd[1527]: time="2025-09-09T05:16:14.420627760Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 05:16:14.420784 containerd[1527]: time="2025-09-09T05:16:14.420767400Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 05:16:14.420979 containerd[1527]: time="2025-09-09T05:16:14.420960760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:16:14.421222 containerd[1527]: time="2025-09-09T05:16:14.421198600Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:16:14.421337 containerd[1527]: time="2025-09-09T05:16:14.421319360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:16:14.421769 containerd[1527]: time="2025-09-09T05:16:14.421745640Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:16:14.421922 containerd[1527]: time="2025-09-09T05:16:14.421894680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:16:14.422723 
containerd[1527]: time="2025-09-09T05:16:14.422125120Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:16:14.422723 containerd[1527]: time="2025-09-09T05:16:14.422144960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 05:16:14.422723 containerd[1527]: time="2025-09-09T05:16:14.422241000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 05:16:14.422723 containerd[1527]: time="2025-09-09T05:16:14.422439280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:16:14.422723 containerd[1527]: time="2025-09-09T05:16:14.422470120Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:16:14.422723 containerd[1527]: time="2025-09-09T05:16:14.422479880Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 05:16:14.422723 containerd[1527]: time="2025-09-09T05:16:14.422520880Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 05:16:14.422911 containerd[1527]: time="2025-09-09T05:16:14.422755440Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 05:16:14.422911 containerd[1527]: time="2025-09-09T05:16:14.422869280Z" level=info msg="metadata content store policy set" policy=shared Sep 9 05:16:14.425863 containerd[1527]: time="2025-09-09T05:16:14.425828520Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 05:16:14.425924 containerd[1527]: time="2025-09-09T05:16:14.425886080Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 05:16:14.425924 containerd[1527]: time="2025-09-09T05:16:14.425900120Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 05:16:14.425924 containerd[1527]: time="2025-09-09T05:16:14.425912960Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 05:16:14.425987 containerd[1527]: time="2025-09-09T05:16:14.425928800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 05:16:14.425987 containerd[1527]: time="2025-09-09T05:16:14.425940640Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 05:16:14.425987 containerd[1527]: time="2025-09-09T05:16:14.425951920Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 05:16:14.425987 containerd[1527]: time="2025-09-09T05:16:14.425963120Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 05:16:14.425987 containerd[1527]: time="2025-09-09T05:16:14.425974280Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 05:16:14.425987 containerd[1527]: time="2025-09-09T05:16:14.425986040Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 
05:16:14.426082 containerd[1527]: time="2025-09-09T05:16:14.425995520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 05:16:14.426082 containerd[1527]: time="2025-09-09T05:16:14.426007080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 05:16:14.426228 containerd[1527]: time="2025-09-09T05:16:14.426118240Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 05:16:14.426228 containerd[1527]: time="2025-09-09T05:16:14.426148520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 05:16:14.426228 containerd[1527]: time="2025-09-09T05:16:14.426165440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 05:16:14.426228 containerd[1527]: time="2025-09-09T05:16:14.426176520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 05:16:14.426228 containerd[1527]: time="2025-09-09T05:16:14.426188120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 05:16:14.426228 containerd[1527]: time="2025-09-09T05:16:14.426198720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 05:16:14.426228 containerd[1527]: time="2025-09-09T05:16:14.426209640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 05:16:14.426228 containerd[1527]: time="2025-09-09T05:16:14.426220480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 05:16:14.426228 containerd[1527]: time="2025-09-09T05:16:14.426232720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 05:16:14.426460 containerd[1527]: time="2025-09-09T05:16:14.426244280Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 05:16:14.426460 containerd[1527]: time="2025-09-09T05:16:14.426255400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 05:16:14.426497 containerd[1527]: time="2025-09-09T05:16:14.426480600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 05:16:14.426515 containerd[1527]: time="2025-09-09T05:16:14.426497920Z" level=info msg="Start snapshots syncer" Sep 9 05:16:14.426537 containerd[1527]: time="2025-09-09T05:16:14.426526640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 05:16:14.426851 containerd[1527]: time="2025-09-09T05:16:14.426740320Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 05:16:14.426851 containerd[1527]: time="2025-09-09T05:16:14.426795520Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 05:16:14.426983 containerd[1527]: time="2025-09-09T05:16:14.426881080Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 05:16:14.427004 containerd[1527]: time="2025-09-09T05:16:14.426985760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 05:16:14.427022 containerd[1527]: time="2025-09-09T05:16:14.427011160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 05:16:14.427040 containerd[1527]: time="2025-09-09T05:16:14.427023520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 05:16:14.427040 containerd[1527]: time="2025-09-09T05:16:14.427034600Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 05:16:14.427072 containerd[1527]: time="2025-09-09T05:16:14.427046520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 05:16:14.427072 containerd[1527]: time="2025-09-09T05:16:14.427058680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 05:16:14.427072 containerd[1527]: time="2025-09-09T05:16:14.427069240Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 05:16:14.427124 containerd[1527]: time="2025-09-09T05:16:14.427091960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 05:16:14.427124 containerd[1527]: 
time="2025-09-09T05:16:14.427102800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 05:16:14.427124 containerd[1527]: time="2025-09-09T05:16:14.427112840Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 05:16:14.427171 containerd[1527]: time="2025-09-09T05:16:14.427144640Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:16:14.427171 containerd[1527]: time="2025-09-09T05:16:14.427157520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:16:14.427171 containerd[1527]: time="2025-09-09T05:16:14.427166000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:16:14.427221 containerd[1527]: time="2025-09-09T05:16:14.427174920Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:16:14.427221 containerd[1527]: time="2025-09-09T05:16:14.427182760Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 05:16:14.427221 containerd[1527]: time="2025-09-09T05:16:14.427192040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 05:16:14.427221 containerd[1527]: time="2025-09-09T05:16:14.427201520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 05:16:14.427282 containerd[1527]: time="2025-09-09T05:16:14.427275880Z" level=info msg="runtime interface created" Sep 9 05:16:14.427282 containerd[1527]: time="2025-09-09T05:16:14.427280920Z" level=info msg="created NRI interface" Sep 9 05:16:14.427317 containerd[1527]: time="2025-09-09T05:16:14.427288960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 05:16:14.427317 containerd[1527]: time="2025-09-09T05:16:14.427300400Z" level=info msg="Connect containerd service" Sep 9 05:16:14.427350 containerd[1527]: time="2025-09-09T05:16:14.427326080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 05:16:14.428072 containerd[1527]: time="2025-09-09T05:16:14.428012760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:16:14.498546 containerd[1527]: time="2025-09-09T05:16:14.498496880Z" level=info msg="Start subscribing containerd event" Sep 9 05:16:14.498762 containerd[1527]: time="2025-09-09T05:16:14.498724160Z" level=info msg="Start recovering state" Sep 9 05:16:14.499007 containerd[1527]: time="2025-09-09T05:16:14.498961680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 05:16:14.499048 containerd[1527]: time="2025-09-09T05:16:14.499031800Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 9 05:16:14.499148 containerd[1527]: time="2025-09-09T05:16:14.499130600Z" level=info msg="Start event monitor" Sep 9 05:16:14.499268 containerd[1527]: time="2025-09-09T05:16:14.499206240Z" level=info msg="Start cni network conf syncer for default" Sep 9 05:16:14.499268 containerd[1527]: time="2025-09-09T05:16:14.499219000Z" level=info msg="Start streaming server" Sep 9 05:16:14.499268 containerd[1527]: time="2025-09-09T05:16:14.499229160Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 05:16:14.499268 containerd[1527]: time="2025-09-09T05:16:14.499236720Z" level=info msg="runtime interface starting up..." Sep 9 05:16:14.499268 containerd[1527]: time="2025-09-09T05:16:14.499242440Z" level=info msg="starting plugins..." Sep 9 05:16:14.499693 containerd[1527]: time="2025-09-09T05:16:14.499561840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 05:16:14.499956 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 05:16:14.501476 containerd[1527]: time="2025-09-09T05:16:14.500885480Z" level=info msg="containerd successfully booted in 0.093799s" Sep 9 05:16:14.538429 tar[1524]: linux-arm64/README.md Sep 9 05:16:14.554787 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 05:16:15.319960 systemd-networkd[1427]: eth0: Gained IPv6LL Sep 9 05:16:15.322267 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 05:16:15.324460 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 05:16:15.327703 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 05:16:15.330404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:16:15.342063 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 05:16:15.355491 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 05:16:15.355942 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 05:16:15.358301 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 05:16:15.370033 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 05:16:15.787604 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 05:16:15.808964 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 05:16:15.811696 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 05:16:15.833354 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 05:16:15.833565 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 05:16:15.836184 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 05:16:15.854305 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 05:16:15.859163 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 05:16:15.861176 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 05:16:15.862430 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 05:16:15.888695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:16:15.890307 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 9 05:16:15.893448 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:16:15.894939 systemd[1]: Startup finished in 1.986s (kernel) + 4.995s (initrd) + 3.351s (userspace) = 10.334s. Sep 9 05:16:16.236000 kubelet[1629]: E0909 05:16:16.235867 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:16:16.238418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:16:16.238574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:16:16.238919 systemd[1]: kubelet.service: Consumed 737ms CPU time, 257.1M memory peak. Sep 9 05:16:20.132118 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 05:16:20.133375 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:50574.service - OpenSSH per-connection server daemon (10.0.0.1:50574). Sep 9 05:16:20.237450 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 50574 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:16:20.239196 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:16:20.245332 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 05:16:20.246447 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 05:16:20.251912 systemd-logind[1509]: New session 1 of user core. Sep 9 05:16:20.273557 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 05:16:20.276118 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 05:16:20.295712 (systemd)[1647]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 05:16:20.299334 systemd-logind[1509]: New session c1 of user core. Sep 9 05:16:20.407664 systemd[1647]: Queued start job for default target default.target. Sep 9 05:16:20.419712 systemd[1647]: Created slice app.slice - User Application Slice. Sep 9 05:16:20.419743 systemd[1647]: Reached target paths.target - Paths. Sep 9 05:16:20.419780 systemd[1647]: Reached target timers.target - Timers. Sep 9 05:16:20.420937 systemd[1647]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 05:16:20.429856 systemd[1647]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 05:16:20.429911 systemd[1647]: Reached target sockets.target - Sockets. Sep 9 05:16:20.429943 systemd[1647]: Reached target basic.target - Basic System. Sep 9 05:16:20.429969 systemd[1647]: Reached target default.target - Main User Target. Sep 9 05:16:20.429992 systemd[1647]: Startup finished in 125ms. Sep 9 05:16:20.430107 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 05:16:20.431295 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 05:16:20.497914 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:50582.service - OpenSSH per-connection server daemon (10.0.0.1:50582). 
Sep 9 05:16:20.562692 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 50582 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:16:20.563921 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:16:20.567450 systemd-logind[1509]: New session 2 of user core. Sep 9 05:16:20.577993 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 05:16:20.629166 sshd[1661]: Connection closed by 10.0.0.1 port 50582 Sep 9 05:16:20.629615 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Sep 9 05:16:20.646960 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:50582.service: Deactivated successfully. Sep 9 05:16:20.648283 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 05:16:20.650937 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit. Sep 9 05:16:20.651876 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:50598.service - OpenSSH per-connection server daemon (10.0.0.1:50598). Sep 9 05:16:20.653087 systemd-logind[1509]: Removed session 2. Sep 9 05:16:20.715600 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 50598 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:16:20.716740 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:16:20.720907 systemd-logind[1509]: New session 3 of user core. Sep 9 05:16:20.732022 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 05:16:20.781672 sshd[1670]: Connection closed by 10.0.0.1 port 50598 Sep 9 05:16:20.782061 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Sep 9 05:16:20.792732 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:50598.service: Deactivated successfully. Sep 9 05:16:20.794047 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 05:16:20.794678 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit. Sep 9 05:16:20.796666 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:50600.service - OpenSSH per-connection server daemon (10.0.0.1:50600). Sep 9 05:16:20.797577 systemd-logind[1509]: Removed session 3. Sep 9 05:16:20.849228 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 50600 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:16:20.850444 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:16:20.854797 systemd-logind[1509]: New session 4 of user core. Sep 9 05:16:20.866001 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 05:16:20.917388 sshd[1680]: Connection closed by 10.0.0.1 port 50600 Sep 9 05:16:20.917850 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Sep 9 05:16:20.928641 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:50600.service: Deactivated successfully. Sep 9 05:16:20.931056 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 05:16:20.931710 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit. Sep 9 05:16:20.933780 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606). Sep 9 05:16:20.934706 systemd-logind[1509]: Removed session 4. 
Sep 9 05:16:20.984191 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:16:20.985051 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:16:20.989881 systemd-logind[1509]: New session 5 of user core. Sep 9 05:16:21.003994 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 05:16:21.063895 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 05:16:21.064627 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:16:21.077808 sudo[1690]: pam_unix(sudo:session): session closed for user root Sep 9 05:16:21.079328 sshd[1689]: Connection closed by 10.0.0.1 port 50606 Sep 9 05:16:21.079870 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Sep 9 05:16:21.093972 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:50606.service: Deactivated successfully. Sep 9 05:16:21.096290 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 05:16:21.098012 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit. Sep 9 05:16:21.100017 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:50616.service - OpenSSH per-connection server daemon (10.0.0.1:50616). Sep 9 05:16:21.101471 systemd-logind[1509]: Removed session 5. Sep 9 05:16:21.158351 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 50616 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:16:21.160448 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:16:21.165891 systemd-logind[1509]: New session 6 of user core. Sep 9 05:16:21.174031 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 05:16:21.226510 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 05:16:21.226779 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:16:21.309717 sudo[1701]: pam_unix(sudo:session): session closed for user root Sep 9 05:16:21.315081 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 05:16:21.315362 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:16:21.327764 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:16:21.370695 augenrules[1723]: No rules Sep 9 05:16:21.371901 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:16:21.372914 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:16:21.373944 sudo[1700]: pam_unix(sudo:session): session closed for user root Sep 9 05:16:21.376448 sshd[1699]: Connection closed by 10.0.0.1 port 50616 Sep 9 05:16:21.376253 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Sep 9 05:16:21.389769 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:50616.service: Deactivated successfully. Sep 9 05:16:21.393349 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 05:16:21.394194 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit. Sep 9 05:16:21.399587 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:50624.service - OpenSSH per-connection server daemon (10.0.0.1:50624). Sep 9 05:16:21.400309 systemd-logind[1509]: Removed session 6. 
Sep 9 05:16:21.457462 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 50624 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:16:21.460794 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:16:21.464922 systemd-logind[1509]: New session 7 of user core. Sep 9 05:16:21.485015 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 05:16:21.536948 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 05:16:21.537236 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:16:21.820847 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 05:16:21.844267 (dockerd)[1756]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 05:16:22.043097 dockerd[1756]: time="2025-09-09T05:16:22.043030334Z" level=info msg="Starting up" Sep 9 05:16:22.043944 dockerd[1756]: time="2025-09-09T05:16:22.043911508Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 05:16:22.054700 dockerd[1756]: time="2025-09-09T05:16:22.054656054Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 05:16:22.087935 dockerd[1756]: time="2025-09-09T05:16:22.087803683Z" level=info msg="Loading containers: start." Sep 9 05:16:22.095859 kernel: Initializing XFRM netlink socket Sep 9 05:16:22.284898 systemd-networkd[1427]: docker0: Link UP Sep 9 05:16:22.288252 dockerd[1756]: time="2025-09-09T05:16:22.288207937Z" level=info msg="Loading containers: done." Sep 9 05:16:22.300081 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1059307940-merged.mount: Deactivated successfully. Sep 9 05:16:22.301238 dockerd[1756]: time="2025-09-09T05:16:22.301169469Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 05:16:22.301308 dockerd[1756]: time="2025-09-09T05:16:22.301286827Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 05:16:22.301404 dockerd[1756]: time="2025-09-09T05:16:22.301383221Z" level=info msg="Initializing buildkit" Sep 9 05:16:22.324360 dockerd[1756]: time="2025-09-09T05:16:22.324308309Z" level=info msg="Completed buildkit initialization" Sep 9 05:16:22.329518 dockerd[1756]: time="2025-09-09T05:16:22.329472393Z" level=info msg="Daemon has completed initialization" Sep 9 05:16:22.329742 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 05:16:22.330510 dockerd[1756]: time="2025-09-09T05:16:22.329609949Z" level=info msg="API listen on /run/docker.sock" Sep 9 05:16:22.984725 containerd[1527]: time="2025-09-09T05:16:22.984687150Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 05:16:23.567598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009527080.mount: Deactivated successfully. 
Sep 9 05:16:24.766786 containerd[1527]: time="2025-09-09T05:16:24.766730098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:24.767446 containerd[1527]: time="2025-09-09T05:16:24.767411629Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615" Sep 9 05:16:24.768093 containerd[1527]: time="2025-09-09T05:16:24.768064305Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:24.771733 containerd[1527]: time="2025-09-09T05:16:24.770634289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:24.771795 containerd[1527]: time="2025-09-09T05:16:24.771722821Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.786993364s" Sep 9 05:16:24.771795 containerd[1527]: time="2025-09-09T05:16:24.771763586Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 9 05:16:24.773244 containerd[1527]: time="2025-09-09T05:16:24.773158397Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 05:16:26.066319 containerd[1527]: time="2025-09-09T05:16:26.066261398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:26.066814 containerd[1527]: time="2025-09-09T05:16:26.066779784Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979" Sep 9 05:16:26.067699 containerd[1527]: time="2025-09-09T05:16:26.067670924Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:26.070579 containerd[1527]: time="2025-09-09T05:16:26.070546596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:26.071479 containerd[1527]: time="2025-09-09T05:16:26.071442317Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.298248803s" Sep 9 05:16:26.071523 containerd[1527]: time="2025-09-09T05:16:26.071477157Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 9 05:16:26.072004 containerd[1527]: 
time="2025-09-09T05:16:26.071916860Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 05:16:26.489602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 05:16:26.491075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:16:26.661301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:16:26.665403 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:16:26.702494 kubelet[2037]: E0909 05:16:26.702445 2037 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:16:26.705687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:16:26.705814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:16:26.707904 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.8M memory peak. Sep 9 05:16:27.302941 containerd[1527]: time="2025-09-09T05:16:27.302882808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:27.303430 containerd[1527]: time="2025-09-09T05:16:27.303400614Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016" Sep 9 05:16:27.304309 containerd[1527]: time="2025-09-09T05:16:27.304281924Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:27.306754 containerd[1527]: time="2025-09-09T05:16:27.306718781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:27.307850 containerd[1527]: time="2025-09-09T05:16:27.307635232Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.235676946s" Sep 9 05:16:27.307850 containerd[1527]: time="2025-09-09T05:16:27.307670615Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 9 05:16:27.308101 containerd[1527]: time="2025-09-09T05:16:27.308078578Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 05:16:28.216291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460917597.mount: Deactivated successfully. 
Sep 9 05:16:28.446915 containerd[1527]: time="2025-09-09T05:16:28.446659547Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961" Sep 9 05:16:28.446915 containerd[1527]: time="2025-09-09T05:16:28.446750026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:28.447752 containerd[1527]: time="2025-09-09T05:16:28.447702987Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:28.449664 containerd[1527]: time="2025-09-09T05:16:28.449616334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:28.450269 containerd[1527]: time="2025-09-09T05:16:28.450098715Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.141866001s" Sep 9 05:16:28.450269 containerd[1527]: time="2025-09-09T05:16:28.450136368Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 9 05:16:28.450563 containerd[1527]: time="2025-09-09T05:16:28.450541196Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 05:16:29.050201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount69739868.mount: Deactivated successfully. 
Sep 9 05:16:29.977472 containerd[1527]: time="2025-09-09T05:16:29.977417392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:29.977912 containerd[1527]: time="2025-09-09T05:16:29.977883471Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 9 05:16:29.978658 containerd[1527]: time="2025-09-09T05:16:29.978630176Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:29.981258 containerd[1527]: time="2025-09-09T05:16:29.981215557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:29.982334 containerd[1527]: time="2025-09-09T05:16:29.982280605Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.531708985s" Sep 9 05:16:29.982383 containerd[1527]: time="2025-09-09T05:16:29.982340430Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 9 05:16:29.982812 containerd[1527]: time="2025-09-09T05:16:29.982790178Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 05:16:30.390034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3605904691.mount: Deactivated successfully. 
Sep 9 05:16:30.394430 containerd[1527]: time="2025-09-09T05:16:30.394392246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:16:30.394878 containerd[1527]: time="2025-09-09T05:16:30.394828625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 9 05:16:30.396026 containerd[1527]: time="2025-09-09T05:16:30.395697613Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:16:30.397678 containerd[1527]: time="2025-09-09T05:16:30.397653899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:16:30.398316 containerd[1527]: time="2025-09-09T05:16:30.398288655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 415.453661ms" Sep 9 05:16:30.398373 containerd[1527]: time="2025-09-09T05:16:30.398322306Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 05:16:30.398933 containerd[1527]: time="2025-09-09T05:16:30.398902794Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 05:16:30.806848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657923442.mount: Deactivated successfully. 
Sep 9 05:16:32.628017 containerd[1527]: time="2025-09-09T05:16:32.627966855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:32.628906 containerd[1527]: time="2025-09-09T05:16:32.628459074Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297" Sep 9 05:16:32.629587 containerd[1527]: time="2025-09-09T05:16:32.629546204Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:32.632724 containerd[1527]: time="2025-09-09T05:16:32.632690313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:32.634448 containerd[1527]: time="2025-09-09T05:16:32.634415004Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.235480329s" Sep 9 05:16:32.634640 containerd[1527]: time="2025-09-09T05:16:32.634539101Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 9 05:16:36.956205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 05:16:36.957551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:16:37.118787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:16:37.137120 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:16:37.167188 kubelet[2201]: E0909 05:16:37.167131 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:16:37.169956 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:16:37.170081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:16:37.170472 systemd[1]: kubelet.service: Consumed 126ms CPU time, 107M memory peak. Sep 9 05:16:37.246583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:16:37.246732 systemd[1]: kubelet.service: Consumed 126ms CPU time, 107M memory peak. Sep 9 05:16:37.248932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:16:37.268511 systemd[1]: Reload requested from client PID 2215 ('systemctl') (unit session-7.scope)... Sep 9 05:16:37.268525 systemd[1]: Reloading... Sep 9 05:16:37.345949 zram_generator::config[2258]: No configuration found. Sep 9 05:16:37.588749 systemd[1]: Reloading finished in 319 ms. Sep 9 05:16:37.654316 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:16:37.656887 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 9 05:16:37.657091 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:16:37.657148 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95M memory peak. Sep 9 05:16:37.658632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:16:37.777066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:16:37.780803 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:16:37.818839 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:16:37.818839 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:16:37.818839 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:16:37.818839 kubelet[2305]: I0909 05:16:37.817755 2305 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:16:38.341362 kubelet[2305]: I0909 05:16:38.341323 2305 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 05:16:38.341362 kubelet[2305]: I0909 05:16:38.341356 2305 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:16:38.341754 kubelet[2305]: I0909 05:16:38.341740 2305 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 05:16:38.362978 kubelet[2305]: E0909 05:16:38.362918 2305 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 05:16:38.366368 kubelet[2305]: I0909 05:16:38.366328 2305 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:16:38.373748 kubelet[2305]: I0909 05:16:38.373722 2305 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:16:38.376549 kubelet[2305]: I0909 05:16:38.376522 2305 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 05:16:38.377607 kubelet[2305]: I0909 05:16:38.377547 2305 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:16:38.377755 kubelet[2305]: I0909 05:16:38.377596 2305 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:16:38.377874 kubelet[2305]: I0909 05:16:38.377838 2305 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:16:38.377874 kubelet[2305]: I0909 05:16:38.377848 2305 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 05:16:38.378088 kubelet[2305]: I0909 05:16:38.378058 2305 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:16:38.380912 kubelet[2305]: I0909 05:16:38.380869 2305 kubelet.go:480] "Attempting to sync node with API server" Sep 9 05:16:38.380912 kubelet[2305]: I0909 05:16:38.380908 2305 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:16:38.380990 kubelet[2305]: I0909 05:16:38.380931 2305 kubelet.go:386] "Adding apiserver pod source" Sep 9 05:16:38.382146 kubelet[2305]: I0909 05:16:38.381995 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:16:38.383405 kubelet[2305]: E0909 05:16:38.383331 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 05:16:38.383663 kubelet[2305]: I0909 05:16:38.383597 2305 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:16:38.385245 kubelet[2305]: I0909 05:16:38.385201 2305 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 
05:16:38.385245 kubelet[2305]: E0909 05:16:38.385225 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 05:16:38.385349 kubelet[2305]: W0909 05:16:38.385334 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 05:16:38.388280 kubelet[2305]: I0909 05:16:38.388240 2305 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:16:38.388348 kubelet[2305]: I0909 05:16:38.388295 2305 server.go:1289] "Started kubelet" Sep 9 05:16:38.388838 kubelet[2305]: I0909 05:16:38.388702 2305 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:16:38.390520 kubelet[2305]: I0909 05:16:38.390462 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:16:38.390976 kubelet[2305]: I0909 05:16:38.390958 2305 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:16:38.396074 kubelet[2305]: I0909 05:16:38.396034 2305 server.go:317] "Adding debug handlers to kubelet server" Sep 9 05:16:38.396164 kubelet[2305]: I0909 05:16:38.396091 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:16:38.396843 kubelet[2305]: I0909 05:16:38.396725 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:16:38.398367 kubelet[2305]: E0909 05:16:38.396680 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18638570a0fafe76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 05:16:38.38826047 +0000 UTC m=+0.604239351,LastTimestamp:2025-09-09 05:16:38.38826047 +0000 UTC m=+0.604239351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 05:16:38.399018 kubelet[2305]: E0909 05:16:38.398880 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:38.399097 kubelet[2305]: I0909 05:16:38.399058 2305 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:16:38.399316 kubelet[2305]: I0909 05:16:38.399276 2305 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:16:38.399393 kubelet[2305]: I0909 05:16:38.399378 2305 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:16:38.399879 kubelet[2305]: E0909 05:16:38.399793 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 
05:16:38.399941 kubelet[2305]: E0909 05:16:38.399890 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms" Sep 9 05:16:38.400273 kubelet[2305]: E0909 05:16:38.400235 2305 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:16:38.400346 kubelet[2305]: I0909 05:16:38.400332 2305 factory.go:223] Registration of the systemd container factory successfully Sep 9 05:16:38.400523 kubelet[2305]: I0909 05:16:38.400502 2305 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:16:38.401544 kubelet[2305]: I0909 05:16:38.401515 2305 factory.go:223] Registration of the containerd container factory successfully Sep 9 05:16:38.413889 kubelet[2305]: I0909 05:16:38.413854 2305 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 05:16:38.414486 kubelet[2305]: I0909 05:16:38.414453 2305 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:16:38.414486 kubelet[2305]: I0909 05:16:38.414475 2305 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:16:38.414486 kubelet[2305]: I0909 05:16:38.414493 2305 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:16:38.414897 kubelet[2305]: I0909 05:16:38.414756 2305 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 05:16:38.414897 kubelet[2305]: I0909 05:16:38.414785 2305 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 05:16:38.414897 kubelet[2305]: I0909 05:16:38.414807 2305 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 05:16:38.415585 kubelet[2305]: I0909 05:16:38.415555 2305 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 05:16:38.415634 kubelet[2305]: E0909 05:16:38.415607 2305 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:16:38.417619 kubelet[2305]: E0909 05:16:38.417582 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 05:16:38.499901 kubelet[2305]: E0909 05:16:38.499863 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:38.516115 kubelet[2305]: E0909 05:16:38.516083 2305 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 05:16:38.597401 kubelet[2305]: I0909 05:16:38.597286 2305 policy_none.go:49] "None policy: Start" Sep 9 05:16:38.597401 kubelet[2305]: I0909 05:16:38.597320 2305 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:16:38.597401 kubelet[2305]: I0909 05:16:38.597333 2305 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:16:38.600014 kubelet[2305]: E0909 05:16:38.599991 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:38.600527 kubelet[2305]: E0909 05:16:38.600486 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms" Sep 9 05:16:38.602996 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 05:16:38.618425 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 05:16:38.621325 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 05:16:38.641093 kubelet[2305]: E0909 05:16:38.641070 2305 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 05:16:38.641857 kubelet[2305]: I0909 05:16:38.641387 2305 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:16:38.641857 kubelet[2305]: I0909 05:16:38.641404 2305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:16:38.641857 kubelet[2305]: I0909 05:16:38.641617 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:16:38.643029 kubelet[2305]: E0909 05:16:38.643008 2305 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 05:16:38.643082 kubelet[2305]: E0909 05:16:38.643046 2305 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 05:16:38.726278 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 9 05:16:38.743025 kubelet[2305]: I0909 05:16:38.742992 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:16:38.743439 kubelet[2305]: E0909 05:16:38.743396 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Sep 9 05:16:38.754985 kubelet[2305]: E0909 05:16:38.754965 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:16:38.758304 systemd[1]: Created slice kubepods-burstable-podad0b90b4b0cd08b87bbf6844c75ede0d.slice - libcontainer container kubepods-burstable-podad0b90b4b0cd08b87bbf6844c75ede0d.slice. Sep 9 05:16:38.777004 kubelet[2305]: E0909 05:16:38.776970 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:16:38.779315 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 05:16:38.780778 kubelet[2305]: E0909 05:16:38.780756 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:16:38.801049 kubelet[2305]: I0909 05:16:38.801021 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad0b90b4b0cd08b87bbf6844c75ede0d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ad0b90b4b0cd08b87bbf6844c75ede0d\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:16:38.801119 kubelet[2305]: I0909 05:16:38.801054 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad0b90b4b0cd08b87bbf6844c75ede0d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ad0b90b4b0cd08b87bbf6844c75ede0d\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:16:38.801119 kubelet[2305]: I0909 05:16:38.801073 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad0b90b4b0cd08b87bbf6844c75ede0d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ad0b90b4b0cd08b87bbf6844c75ede0d\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:16:38.801119 kubelet[2305]: I0909 05:16:38.801089 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:38.801119 kubelet[2305]: I0909 05:16:38.801105 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:38.801217 kubelet[2305]: I0909 05:16:38.801156 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 05:16:38.801217 kubelet[2305]: I0909 05:16:38.801173 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:38.801217 kubelet[2305]: I0909 05:16:38.801189 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:38.801275 kubelet[2305]: I0909 05:16:38.801219 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:38.945153 kubelet[2305]: I0909 05:16:38.945038 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:16:38.945937 kubelet[2305]: E0909 05:16:38.945905 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Sep 9 05:16:39.001357 kubelet[2305]: E0909 05:16:39.001329 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms" Sep 9 05:16:39.056531 containerd[1527]: time="2025-09-09T05:16:39.056281766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 05:16:39.073508 containerd[1527]: time="2025-09-09T05:16:39.072942488Z" level=info msg="connecting to shim eb1192e61278bb0cd549a51f4115cff564c9d29604e7266e87755a64ee3a9c71" address="unix:///run/containerd/s/b8dd47af42644803a2c999127845b74847692cc95df3bc6f7ab9b0ae14cdc259" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:16:39.079865 containerd[1527]: time="2025-09-09T05:16:39.079837381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ad0b90b4b0cd08b87bbf6844c75ede0d,Namespace:kube-system,Attempt:0,}" Sep 9 05:16:39.082377 containerd[1527]: time="2025-09-09T05:16:39.082350786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 05:16:39.101004 systemd[1]: Started cri-containerd-eb1192e61278bb0cd549a51f4115cff564c9d29604e7266e87755a64ee3a9c71.scope - libcontainer container eb1192e61278bb0cd549a51f4115cff564c9d29604e7266e87755a64ee3a9c71. 
Sep 9 05:16:39.109847 containerd[1527]: time="2025-09-09T05:16:39.109496323Z" level=info msg="connecting to shim 0509de9874181f1a8966e94688358450bb409d72c45da8822b6ee31e04a4c111" address="unix:///run/containerd/s/1e0a80efc4303fe2cf771103ed0a399e7049323f2d0f1f28c1e26e8e03dd46a9" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:16:39.116107 containerd[1527]: time="2025-09-09T05:16:39.116045174Z" level=info msg="connecting to shim 441e1226a726efcba089244d632ba7114df4fe303d2b973386158f14dce926cc" address="unix:///run/containerd/s/0d061da353f02100d59c4a06f651b9524e99ea75a19195797409eb56fe9ffe41" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:16:39.135008 systemd[1]: Started cri-containerd-0509de9874181f1a8966e94688358450bb409d72c45da8822b6ee31e04a4c111.scope - libcontainer container 0509de9874181f1a8966e94688358450bb409d72c45da8822b6ee31e04a4c111. Sep 9 05:16:39.139116 systemd[1]: Started cri-containerd-441e1226a726efcba089244d632ba7114df4fe303d2b973386158f14dce926cc.scope - libcontainer container 441e1226a726efcba089244d632ba7114df4fe303d2b973386158f14dce926cc. Sep 9 05:16:39.149852 containerd[1527]: time="2025-09-09T05:16:39.149793645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb1192e61278bb0cd549a51f4115cff564c9d29604e7266e87755a64ee3a9c71\"" Sep 9 05:16:39.156493 containerd[1527]: time="2025-09-09T05:16:39.156397701Z" level=info msg="CreateContainer within sandbox \"eb1192e61278bb0cd549a51f4115cff564c9d29604e7266e87755a64ee3a9c71\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 05:16:39.167273 containerd[1527]: time="2025-09-09T05:16:39.167024551Z" level=info msg="Container adc9e3ed49c52316621fd8ad9bd7c61c94bf8198e71dba0c3919fe2a23c3d0a0: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:16:39.175439 containerd[1527]: time="2025-09-09T05:16:39.175404813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ad0b90b4b0cd08b87bbf6844c75ede0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"441e1226a726efcba089244d632ba7114df4fe303d2b973386158f14dce926cc\"" Sep 9 05:16:39.178216 containerd[1527]: time="2025-09-09T05:16:39.178181313Z" level=info msg="CreateContainer within sandbox \"eb1192e61278bb0cd549a51f4115cff564c9d29604e7266e87755a64ee3a9c71\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"adc9e3ed49c52316621fd8ad9bd7c61c94bf8198e71dba0c3919fe2a23c3d0a0\"" Sep 9 05:16:39.178792 containerd[1527]: time="2025-09-09T05:16:39.178763587Z" level=info msg="StartContainer for \"adc9e3ed49c52316621fd8ad9bd7c61c94bf8198e71dba0c3919fe2a23c3d0a0\"" Sep 9 05:16:39.179262 containerd[1527]: time="2025-09-09T05:16:39.179229086Z" level=info msg="CreateContainer within sandbox \"441e1226a726efcba089244d632ba7114df4fe303d2b973386158f14dce926cc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 05:16:39.179777 containerd[1527]: time="2025-09-09T05:16:39.179735098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"0509de9874181f1a8966e94688358450bb409d72c45da8822b6ee31e04a4c111\"" Sep 9 05:16:39.179973 containerd[1527]: time="2025-09-09T05:16:39.179798830Z" level=info msg="connecting to shim adc9e3ed49c52316621fd8ad9bd7c61c94bf8198e71dba0c3919fe2a23c3d0a0" 
address="unix:///run/containerd/s/b8dd47af42644803a2c999127845b74847692cc95df3bc6f7ab9b0ae14cdc259" protocol=ttrpc version=3 Sep 9 05:16:39.183422 containerd[1527]: time="2025-09-09T05:16:39.183331105Z" level=info msg="CreateContainer within sandbox \"0509de9874181f1a8966e94688358450bb409d72c45da8822b6ee31e04a4c111\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 05:16:39.185794 containerd[1527]: time="2025-09-09T05:16:39.185761603Z" level=info msg="Container c0d8d060bda6079858ae0fdbfa9978e9b6845a9e5cfd5c2e164e6172498c5475: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:16:39.192390 containerd[1527]: time="2025-09-09T05:16:39.192359294Z" level=info msg="Container b77c4eff3bca633d8c279eb174cf878efb81342f367597cf8822af89b2a7d66e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:16:39.197214 containerd[1527]: time="2025-09-09T05:16:39.197125093Z" level=info msg="CreateContainer within sandbox \"441e1226a726efcba089244d632ba7114df4fe303d2b973386158f14dce926cc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c0d8d060bda6079858ae0fdbfa9978e9b6845a9e5cfd5c2e164e6172498c5475\"" Sep 9 05:16:39.197845 containerd[1527]: time="2025-09-09T05:16:39.197718136Z" level=info msg="StartContainer for \"c0d8d060bda6079858ae0fdbfa9978e9b6845a9e5cfd5c2e164e6172498c5475\"" Sep 9 05:16:39.198330 containerd[1527]: time="2025-09-09T05:16:39.198297888Z" level=info msg="CreateContainer within sandbox \"0509de9874181f1a8966e94688358450bb409d72c45da8822b6ee31e04a4c111\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b77c4eff3bca633d8c279eb174cf878efb81342f367597cf8822af89b2a7d66e\"" Sep 9 05:16:39.198677 containerd[1527]: time="2025-09-09T05:16:39.198655459Z" level=info msg="StartContainer for \"b77c4eff3bca633d8c279eb174cf878efb81342f367597cf8822af89b2a7d66e\"" Sep 9 05:16:39.198807 containerd[1527]: time="2025-09-09T05:16:39.198667108Z" level=info msg="connecting to shim c0d8d060bda6079858ae0fdbfa9978e9b6845a9e5cfd5c2e164e6172498c5475" address="unix:///run/containerd/s/0d061da353f02100d59c4a06f651b9524e99ea75a19195797409eb56fe9ffe41" protocol=ttrpc version=3 Sep 9 05:16:39.200883 containerd[1527]: time="2025-09-09T05:16:39.200857732Z" level=info msg="connecting to shim b77c4eff3bca633d8c279eb174cf878efb81342f367597cf8822af89b2a7d66e" address="unix:///run/containerd/s/1e0a80efc4303fe2cf771103ed0a399e7049323f2d0f1f28c1e26e8e03dd46a9" protocol=ttrpc version=3 Sep 9 05:16:39.202968 systemd[1]: Started cri-containerd-adc9e3ed49c52316621fd8ad9bd7c61c94bf8198e71dba0c3919fe2a23c3d0a0.scope - libcontainer container adc9e3ed49c52316621fd8ad9bd7c61c94bf8198e71dba0c3919fe2a23c3d0a0. Sep 9 05:16:39.227979 systemd[1]: Started cri-containerd-c0d8d060bda6079858ae0fdbfa9978e9b6845a9e5cfd5c2e164e6172498c5475.scope - libcontainer container c0d8d060bda6079858ae0fdbfa9978e9b6845a9e5cfd5c2e164e6172498c5475. Sep 9 05:16:39.232198 systemd[1]: Started cri-containerd-b77c4eff3bca633d8c279eb174cf878efb81342f367597cf8822af89b2a7d66e.scope - libcontainer container b77c4eff3bca633d8c279eb174cf878efb81342f367597cf8822af89b2a7d66e. 
Sep 9 05:16:39.244946 kubelet[2305]: E0909 05:16:39.244901 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 05:16:39.264708 containerd[1527]: time="2025-09-09T05:16:39.264669435Z" level=info msg="StartContainer for \"adc9e3ed49c52316621fd8ad9bd7c61c94bf8198e71dba0c3919fe2a23c3d0a0\" returns successfully" Sep 9 05:16:39.279587 containerd[1527]: time="2025-09-09T05:16:39.279400946Z" level=info msg="StartContainer for \"c0d8d060bda6079858ae0fdbfa9978e9b6845a9e5cfd5c2e164e6172498c5475\" returns successfully" Sep 9 05:16:39.280130 kubelet[2305]: E0909 05:16:39.280100 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 05:16:39.284863 containerd[1527]: time="2025-09-09T05:16:39.284812671Z" level=info msg="StartContainer for \"b77c4eff3bca633d8c279eb174cf878efb81342f367597cf8822af89b2a7d66e\" returns successfully" Sep 9 05:16:39.348085 kubelet[2305]: I0909 05:16:39.348054 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:16:39.348900 kubelet[2305]: E0909 05:16:39.348357 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Sep 9 05:16:39.424446 kubelet[2305]: E0909 05:16:39.424416 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:16:39.427131 kubelet[2305]: E0909 05:16:39.427113 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:16:39.431083 kubelet[2305]: E0909 05:16:39.431059 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:16:40.154136 kubelet[2305]: I0909 05:16:40.154099 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:16:40.434812 kubelet[2305]: E0909 05:16:40.434641 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:16:40.435212 kubelet[2305]: E0909 05:16:40.434808 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:16:40.780552 kubelet[2305]: E0909 05:16:40.780415 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 05:16:40.857666 kubelet[2305]: I0909 05:16:40.857625 2305 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 05:16:40.857666 kubelet[2305]: E0909 05:16:40.857666 2305 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" 
Sep 9 05:16:40.871152 kubelet[2305]: E0909 05:16:40.871115 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:40.971938 kubelet[2305]: E0909 05:16:40.971899 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:41.072558 kubelet[2305]: E0909 05:16:41.072455 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:41.173113 kubelet[2305]: E0909 05:16:41.173069 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:41.273660 kubelet[2305]: E0909 05:16:41.273615 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:41.374569 kubelet[2305]: E0909 05:16:41.374472 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:41.435483 kubelet[2305]: E0909 05:16:41.435454 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:16:41.474891 kubelet[2305]: E0909 05:16:41.474857 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:41.575603 kubelet[2305]: E0909 05:16:41.575556 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:41.676793 kubelet[2305]: E0909 05:16:41.676648 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:41.700218 kubelet[2305]: I0909 05:16:41.700169 2305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:16:41.709434 kubelet[2305]: I0909 05:16:41.709389 2305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:41.714056 kubelet[2305]: I0909 05:16:41.713529 2305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:16:41.909783 kubelet[2305]: I0909 05:16:41.909747 2305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:41.915112 kubelet[2305]: E0909 05:16:41.915043 2305 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:42.383945 kubelet[2305]: I0909 05:16:42.383850 2305 apiserver.go:52] "Watching apiserver" Sep 9 05:16:42.400166 kubelet[2305]: I0909 05:16:42.400120 2305 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:16:42.722739 systemd[1]: Reload requested from client PID 2586 ('systemctl') (unit session-7.scope)... Sep 9 05:16:42.722755 systemd[1]: Reloading... Sep 9 05:16:42.794846 zram_generator::config[2629]: No configuration found. Sep 9 05:16:42.961085 systemd[1]: Reloading finished in 238 ms. Sep 9 05:16:42.994162 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:16:43.015568 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 05:16:43.015788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 05:16:43.015857 systemd[1]: kubelet.service: Consumed 953ms CPU time, 128.5M memory peak. Sep 9 05:16:43.017402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:16:43.150360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:16:43.154128 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:16:43.191081 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:16:43.191081 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:16:43.191081 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:16:43.191417 kubelet[2671]: I0909 05:16:43.191114 2671 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:16:43.199841 kubelet[2671]: I0909 05:16:43.198484 2671 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 05:16:43.199841 kubelet[2671]: I0909 05:16:43.198514 2671 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:16:43.199841 kubelet[2671]: I0909 05:16:43.198693 2671 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 05:16:43.200056 kubelet[2671]: I0909 05:16:43.200039 2671 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 05:16:43.203628 kubelet[2671]: I0909 05:16:43.203593 2671 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:16:43.206912 kubelet[2671]: I0909 05:16:43.206896 2671 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:16:43.209447 kubelet[2671]: I0909 05:16:43.209428 2671 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 05:16:43.209727 kubelet[2671]: I0909 05:16:43.209700 2671 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:16:43.209948 kubelet[2671]: I0909 05:16:43.209795 2671 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:16:43.210080 kubelet[2671]: I0909 05:16:43.210067 2671 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:16:43.210130 kubelet[2671]: I0909 05:16:43.210123 2671 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 05:16:43.210218 kubelet[2671]: I0909 05:16:43.210209 2671 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:16:43.210434 kubelet[2671]: I0909 05:16:43.210422 2671 kubelet.go:480] "Attempting to sync node with API server" Sep 9 05:16:43.210983 kubelet[2671]: I0909 05:16:43.210965 2671 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:16:43.211098 kubelet[2671]: I0909 05:16:43.211087 2671 kubelet.go:386] "Adding apiserver pod source" Sep 9 05:16:43.211157 kubelet[2671]: I0909 05:16:43.211148 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:16:43.213111 kubelet[2671]: I0909 05:16:43.213080 2671 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:16:43.213743 kubelet[2671]: I0909 05:16:43.213722 2671 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 05:16:43.218164 kubelet[2671]: I0909 05:16:43.218130 2671 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:16:43.218233 kubelet[2671]: I0909 05:16:43.218201 2671 server.go:1289] "Started kubelet" Sep 9 05:16:43.218721 kubelet[2671]: I0909 05:16:43.218683 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:16:43.219727 kubelet[2671]: 
I0909 05:16:43.219704 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:16:43.219913 kubelet[2671]: I0909 05:16:43.219889 2671 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:16:43.223567 kubelet[2671]: I0909 05:16:43.223120 2671 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:16:43.223567 kubelet[2671]: I0909 05:16:43.223506 2671 server.go:317] "Adding debug handlers to kubelet server" Sep 9 05:16:43.224304 kubelet[2671]: I0909 05:16:43.224270 2671 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:16:43.227985 kubelet[2671]: I0909 05:16:43.227958 2671 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:16:43.228079 kubelet[2671]: E0909 05:16:43.228061 2671 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:16:43.228607 kubelet[2671]: I0909 05:16:43.228577 2671 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:16:43.228687 kubelet[2671]: I0909 05:16:43.228672 2671 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:16:43.235683 kubelet[2671]: I0909 05:16:43.235485 2671 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 05:16:43.237644 kubelet[2671]: I0909 05:16:43.236482 2671 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 05:16:43.237644 kubelet[2671]: I0909 05:16:43.236506 2671 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 05:16:43.237644 kubelet[2671]: I0909 05:16:43.236525 2671 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 05:16:43.237644 kubelet[2671]: I0909 05:16:43.236531 2671 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 05:16:43.237644 kubelet[2671]: E0909 05:16:43.236564 2671 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:16:43.241664 kubelet[2671]: E0909 05:16:43.241635 2671 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:16:43.244050 kubelet[2671]: I0909 05:16:43.244027 2671 factory.go:223] Registration of the containerd container factory successfully Sep 9 05:16:43.244050 kubelet[2671]: I0909 05:16:43.244047 2671 factory.go:223] Registration of the systemd container factory successfully Sep 9 05:16:43.244143 kubelet[2671]: I0909 05:16:43.244129 2671 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:16:43.270392 kubelet[2671]: I0909 05:16:43.270295 2671 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:16:43.270392 kubelet[2671]: I0909 05:16:43.270313 2671 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:16:43.270392 kubelet[2671]: I0909 05:16:43.270335 2671 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:16:43.272329 kubelet[2671]: I0909 05:16:43.272296 2671 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 05:16:43.272329 kubelet[2671]: I0909 05:16:43.272318 2671 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 05:16:43.272392 kubelet[2671]: I0909 05:16:43.272335 2671 policy_none.go:49] "None policy: Start" Sep 9 05:16:43.272392 kubelet[2671]: I0909 05:16:43.272344 2671 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:16:43.272392 kubelet[2671]: I0909 05:16:43.272354 2671 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:16:43.272456 kubelet[2671]: I0909 05:16:43.272446 2671 state_mem.go:75] "Updated machine memory state" Sep 9 05:16:43.276045 kubelet[2671]: E0909 05:16:43.276020 2671 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 05:16:43.276224 kubelet[2671]: I0909 05:16:43.276201 2671 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:16:43.276517 kubelet[2671]: I0909 05:16:43.276220 2671 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:16:43.277004 kubelet[2671]: I0909 05:16:43.276978 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:16:43.278947 kubelet[2671]: E0909 05:16:43.278805 2671 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 05:16:43.337909 kubelet[2671]: I0909 05:16:43.337874 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:43.338043 kubelet[2671]: I0909 05:16:43.337923 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:16:43.338043 kubelet[2671]: I0909 05:16:43.337883 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:16:43.343797 kubelet[2671]: E0909 05:16:43.343763 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:43.344141 kubelet[2671]: E0909 05:16:43.344120 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 05:16:43.344702 kubelet[2671]: E0909 05:16:43.344633 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 05:16:43.378954 kubelet[2671]: I0909 05:16:43.378925 2671 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:16:43.385820 kubelet[2671]: I0909 05:16:43.385795 2671 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 05:16:43.385881 kubelet[2671]: I0909 05:16:43.385874 2671 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 05:16:43.429125 kubelet[2671]: I0909 05:16:43.429082 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad0b90b4b0cd08b87bbf6844c75ede0d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ad0b90b4b0cd08b87bbf6844c75ede0d\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:16:43.429125 kubelet[2671]: I0909 05:16:43.429119 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:43.429249 kubelet[2671]: I0909 05:16:43.429138 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:43.429249 kubelet[2671]: I0909 05:16:43.429154 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:43.429249 kubelet[2671]: I0909 05:16:43.429183 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:43.429249 kubelet[2671]: I0909 05:16:43.429209 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:43.429249 kubelet[2671]: I0909 05:16:43.429231 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 05:16:43.429355 kubelet[2671]: I0909 05:16:43.429245 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad0b90b4b0cd08b87bbf6844c75ede0d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ad0b90b4b0cd08b87bbf6844c75ede0d\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:16:43.429355 kubelet[2671]: I0909 05:16:43.429261 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad0b90b4b0cd08b87bbf6844c75ede0d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ad0b90b4b0cd08b87bbf6844c75ede0d\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:16:43.724802 sudo[2708]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 05:16:43.725493 sudo[2708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 05:16:44.053755 sudo[2708]: pam_unix(sudo:session): session closed for user root Sep 9 05:16:44.211375 kubelet[2671]: I0909 05:16:44.211334 2671 apiserver.go:52] "Watching apiserver" Sep 9 05:16:44.229606 kubelet[2671]: I0909 05:16:44.229562 2671 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:16:44.253733 kubelet[2671]: I0909 05:16:44.253698 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:16:44.253733 kubelet[2671]: I0909 05:16:44.253723 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:44.254000 kubelet[2671]: I0909 05:16:44.253982 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:16:44.261840 kubelet[2671]: E0909 05:16:44.260720 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 05:16:44.261840 kubelet[2671]: E0909 05:16:44.261490 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 05:16:44.267664 kubelet[2671]: E0909 05:16:44.267617 2671 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:16:44.271852 kubelet[2671]: I0909 05:16:44.271787 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.271762765 podStartE2EDuration="3.271762765s" podCreationTimestamp="2025-09-09 05:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:16:44.271636832 +0000 UTC m=+1.113664538" watchObservedRunningTime="2025-09-09 05:16:44.271762765 +0000 UTC m=+1.113790431" Sep 9 05:16:44.287430 kubelet[2671]: I0909 05:16:44.286974 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.286957671 podStartE2EDuration="3.286957671s" podCreationTimestamp="2025-09-09 05:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:16:44.279302474 +0000 UTC m=+1.121330180" watchObservedRunningTime="2025-09-09 05:16:44.286957671 +0000 UTC m=+1.128985377" Sep 9 05:16:44.287430 kubelet[2671]: I0909 05:16:44.287116 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.287111896 podStartE2EDuration="3.287111896s" podCreationTimestamp="2025-09-09 05:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:16:44.286418886 +0000 UTC m=+1.128446592" watchObservedRunningTime="2025-09-09 05:16:44.287111896 +0000 UTC m=+1.129139602" Sep 9 05:16:45.653717 sudo[1736]: pam_unix(sudo:session): session closed for user root Sep 9 05:16:45.654844 sshd[1735]: Connection closed by 10.0.0.1 port 50624 Sep 9 05:16:45.656384 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Sep 9 05:16:45.659104 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:50624.service: Deactivated successfully. Sep 9 05:16:45.661445 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 05:16:45.661730 systemd[1]: session-7.scope: Consumed 6.583s CPU time, 257.4M memory peak. Sep 9 05:16:45.662691 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Sep 9 05:16:45.664203 systemd-logind[1509]: Removed session 7. Sep 9 05:16:48.500845 kubelet[2671]: I0909 05:16:48.500785 2671 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 05:16:48.501166 containerd[1527]: time="2025-09-09T05:16:48.501120507Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 05:16:48.501586 kubelet[2671]: I0909 05:16:48.501549 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 05:16:49.316493 systemd[1]: Created slice kubepods-besteffort-pod84c5ad2c_9bd3_47c6_b5d5_83c49f666884.slice - libcontainer container kubepods-besteffort-pod84c5ad2c_9bd3_47c6_b5d5_83c49f666884.slice. Sep 9 05:16:49.327432 systemd[1]: Created slice kubepods-burstable-pod2fd851e2_54c8_4c6d_8b3e_0ccbef1b90fb.slice - libcontainer container kubepods-burstable-pod2fd851e2_54c8_4c6d_8b3e_0ccbef1b90fb.slice. 
Sep 9 05:16:49.373190 kubelet[2671]: I0909 05:16:49.373104 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84c5ad2c-9bd3-47c6-b5d5-83c49f666884-lib-modules\") pod \"kube-proxy-qxb7m\" (UID: \"84c5ad2c-9bd3-47c6-b5d5-83c49f666884\") " pod="kube-system/kube-proxy-qxb7m" Sep 9 05:16:49.373190 kubelet[2671]: I0909 05:16:49.373150 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-cgroup\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373190 kubelet[2671]: I0909 05:16:49.373166 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-etc-cni-netd\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373592 kubelet[2671]: I0909 05:16:49.373393 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-lib-modules\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373592 kubelet[2671]: I0909 05:16:49.373417 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-xtables-lock\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373592 kubelet[2671]: I0909 05:16:49.373447 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/84c5ad2c-9bd3-47c6-b5d5-83c49f666884-kube-proxy\") pod \"kube-proxy-qxb7m\" (UID: \"84c5ad2c-9bd3-47c6-b5d5-83c49f666884\") " pod="kube-system/kube-proxy-qxb7m" Sep 9 05:16:49.373592 kubelet[2671]: I0909 05:16:49.373466 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84c5ad2c-9bd3-47c6-b5d5-83c49f666884-xtables-lock\") pod \"kube-proxy-qxb7m\" (UID: \"84c5ad2c-9bd3-47c6-b5d5-83c49f666884\") " pod="kube-system/kube-proxy-qxb7m" Sep 9 05:16:49.373592 kubelet[2671]: I0909 05:16:49.373485 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-bpf-maps\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373592 kubelet[2671]: I0909 05:16:49.373503 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-clustermesh-secrets\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373751 kubelet[2671]: I0909 05:16:49.373521 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-config-path\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373751 kubelet[2671]: I0909 05:16:49.373536 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-hubble-tls\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373751 kubelet[2671]: I0909 05:16:49.373553 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-hostproc\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373751 kubelet[2671]: I0909 05:16:49.373568 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cni-path\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373751 kubelet[2671]: I0909 05:16:49.373603 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-host-proc-sys-net\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373751 kubelet[2671]: I0909 05:16:49.373651 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnpbq\" (UniqueName: \"kubernetes.io/projected/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-kube-api-access-hnpbq\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373931 kubelet[2671]: I0909 05:16:49.373681 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlltt\" (UniqueName: \"kubernetes.io/projected/84c5ad2c-9bd3-47c6-b5d5-83c49f666884-kube-api-access-zlltt\") pod \"kube-proxy-qxb7m\" (UID: \"84c5ad2c-9bd3-47c6-b5d5-83c49f666884\") " pod="kube-system/kube-proxy-qxb7m" Sep 9 05:16:49.373931 kubelet[2671]: I0909 05:16:49.373702 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-run\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.373931 kubelet[2671]: I0909 05:16:49.373717 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-host-proc-sys-kernel\") pod \"cilium-lchzc\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " pod="kube-system/cilium-lchzc" Sep 9 05:16:49.626384 containerd[1527]: time="2025-09-09T05:16:49.626049220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qxb7m,Uid:84c5ad2c-9bd3-47c6-b5d5-83c49f666884,Namespace:kube-system,Attempt:0,}" Sep 9 05:16:49.631480 containerd[1527]: time="2025-09-09T05:16:49.631446867Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-lchzc,Uid:2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb,Namespace:kube-system,Attempt:0,}" Sep 9 05:16:49.648311 containerd[1527]: time="2025-09-09T05:16:49.648276274Z" level=info msg="connecting to shim 789622f412b1e70f8900436e3509fc5228c6b1a5cc5db350fd80267b5276567c" address="unix:///run/containerd/s/fbc6317ff6a6bf6b4f45bd11ed21e48ef2edd4208c824830ad08d667c3a88a3e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:16:49.652345 containerd[1527]: time="2025-09-09T05:16:49.652307845Z" level=info msg="connecting to shim 0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2" address="unix:///run/containerd/s/b48391e19262fd549e14d451a8bf77122b8f2567235c268673353e5dfc3f28e7" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:16:49.697983 systemd[1]: Started cri-containerd-789622f412b1e70f8900436e3509fc5228c6b1a5cc5db350fd80267b5276567c.scope - libcontainer container 789622f412b1e70f8900436e3509fc5228c6b1a5cc5db350fd80267b5276567c. Sep 9 05:16:49.698945 systemd[1]: Created slice kubepods-besteffort-poda631756f_f623_4093_a7a3_dddf2be286f3.slice - libcontainer container kubepods-besteffort-poda631756f_f623_4093_a7a3_dddf2be286f3.slice. Sep 9 05:16:49.702347 systemd[1]: Started cri-containerd-0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2.scope - libcontainer container 0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2. Sep 9 05:16:49.729034 containerd[1527]: time="2025-09-09T05:16:49.728997518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qxb7m,Uid:84c5ad2c-9bd3-47c6-b5d5-83c49f666884,Namespace:kube-system,Attempt:0,} returns sandbox id \"789622f412b1e70f8900436e3509fc5228c6b1a5cc5db350fd80267b5276567c\"" Sep 9 05:16:49.731651 containerd[1527]: time="2025-09-09T05:16:49.731214830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lchzc,Uid:2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\"" Sep 9 05:16:49.733103 containerd[1527]: time="2025-09-09T05:16:49.733074859Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 05:16:49.734397 containerd[1527]: time="2025-09-09T05:16:49.734363877Z" level=info msg="CreateContainer within sandbox \"789622f412b1e70f8900436e3509fc5228c6b1a5cc5db350fd80267b5276567c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 05:16:49.743986 containerd[1527]: time="2025-09-09T05:16:49.743953892Z" level=info msg="Container acbaae02926e2224ead3152b7be137fc9642acaf7ca6b7f985a2a7fc5c43cf0e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:16:49.750619 containerd[1527]: time="2025-09-09T05:16:49.750490282Z" level=info msg="CreateContainer within sandbox \"789622f412b1e70f8900436e3509fc5228c6b1a5cc5db350fd80267b5276567c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"acbaae02926e2224ead3152b7be137fc9642acaf7ca6b7f985a2a7fc5c43cf0e\"" Sep 9 05:16:49.751911 containerd[1527]: time="2025-09-09T05:16:49.751853076Z" level=info msg="StartContainer for \"acbaae02926e2224ead3152b7be137fc9642acaf7ca6b7f985a2a7fc5c43cf0e\"" Sep 9 05:16:49.753220 containerd[1527]: time="2025-09-09T05:16:49.753176862Z" level=info msg="connecting to shim acbaae02926e2224ead3152b7be137fc9642acaf7ca6b7f985a2a7fc5c43cf0e" address="unix:///run/containerd/s/fbc6317ff6a6bf6b4f45bd11ed21e48ef2edd4208c824830ad08d667c3a88a3e" protocol=ttrpc version=3 Sep 9 
05:16:49.769982 systemd[1]: Started cri-containerd-acbaae02926e2224ead3152b7be137fc9642acaf7ca6b7f985a2a7fc5c43cf0e.scope - libcontainer container acbaae02926e2224ead3152b7be137fc9642acaf7ca6b7f985a2a7fc5c43cf0e. Sep 9 05:16:49.775812 kubelet[2671]: I0909 05:16:49.775754 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllhz\" (UniqueName: \"kubernetes.io/projected/a631756f-f623-4093-a7a3-dddf2be286f3-kube-api-access-hllhz\") pod \"cilium-operator-6c4d7847fc-ph4df\" (UID: \"a631756f-f623-4093-a7a3-dddf2be286f3\") " pod="kube-system/cilium-operator-6c4d7847fc-ph4df" Sep 9 05:16:49.776081 kubelet[2671]: I0909 05:16:49.775811 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a631756f-f623-4093-a7a3-dddf2be286f3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ph4df\" (UID: \"a631756f-f623-4093-a7a3-dddf2be286f3\") " pod="kube-system/cilium-operator-6c4d7847fc-ph4df" Sep 9 05:16:49.800915 containerd[1527]: time="2025-09-09T05:16:49.800876399Z" level=info msg="StartContainer for \"acbaae02926e2224ead3152b7be137fc9642acaf7ca6b7f985a2a7fc5c43cf0e\" returns successfully" Sep 9 05:16:50.002605 containerd[1527]: time="2025-09-09T05:16:50.002446307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ph4df,Uid:a631756f-f623-4093-a7a3-dddf2be286f3,Namespace:kube-system,Attempt:0,}" Sep 9 05:16:50.017425 containerd[1527]: time="2025-09-09T05:16:50.017387969Z" level=info msg="connecting to shim 737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600" address="unix:///run/containerd/s/d8fb036e48a5016d5dfdf7c6480222a8f69d69559c589d34dc8dd6f82c4b8a92" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:16:50.038947 systemd[1]: Started cri-containerd-737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600.scope - libcontainer container 737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600. Sep 9 05:16:50.070449 containerd[1527]: time="2025-09-09T05:16:50.070382059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ph4df,Uid:a631756f-f623-4093-a7a3-dddf2be286f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600\"" Sep 9 05:16:50.277860 kubelet[2671]: I0909 05:16:50.277714 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qxb7m" podStartSLOduration=1.277512201 podStartE2EDuration="1.277512201s" podCreationTimestamp="2025-09-09 05:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:16:50.277402578 +0000 UTC m=+7.119430324" watchObservedRunningTime="2025-09-09 05:16:50.277512201 +0000 UTC m=+7.119539867" Sep 9 05:16:57.339105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521177575.mount: Deactivated successfully. 
Sep 9 05:16:58.726444 containerd[1527]: time="2025-09-09T05:16:58.726394599Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:58.727038 containerd[1527]: time="2025-09-09T05:16:58.727008486Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 9 05:16:58.728037 containerd[1527]: time="2025-09-09T05:16:58.728010628Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:16:58.729354 containerd[1527]: time="2025-09-09T05:16:58.729327455Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.996218869s" Sep 9 05:16:58.729412 containerd[1527]: time="2025-09-09T05:16:58.729360259Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 05:16:58.734416 containerd[1527]: time="2025-09-09T05:16:58.734137297Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 05:16:58.740388 containerd[1527]: time="2025-09-09T05:16:58.740354499Z" level=info msg="CreateContainer within sandbox \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:16:58.750404 containerd[1527]: time="2025-09-09T05:16:58.750354758Z" level=info msg="Container c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:16:58.770462 containerd[1527]: time="2025-09-09T05:16:58.770418805Z" level=info msg="CreateContainer within sandbox \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\"" Sep 9 05:16:58.776530 containerd[1527]: time="2025-09-09T05:16:58.776479705Z" level=info msg="StartContainer for \"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\"" Sep 9 05:16:58.780582 containerd[1527]: time="2025-09-09T05:16:58.780546722Z" level=info msg="connecting to shim c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b" address="unix:///run/containerd/s/b48391e19262fd549e14d451a8bf77122b8f2567235c268673353e5dfc3f28e7" protocol=ttrpc version=3 Sep 9 05:16:58.830469 systemd[1]: Started cri-containerd-c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b.scope - libcontainer container c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b. 
Sep 9 05:16:58.863445 containerd[1527]: time="2025-09-09T05:16:58.863409360Z" level=info msg="StartContainer for \"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\" returns successfully" Sep 9 05:16:58.869935 systemd[1]: cri-containerd-c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b.scope: Deactivated successfully. Sep 9 05:16:58.904839 containerd[1527]: time="2025-09-09T05:16:58.904223271Z" level=info msg="received exit event container_id:\"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\" id:\"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\" pid:3099 exited_at:{seconds:1757395018 nanos:894359151}" Sep 9 05:16:58.905416 containerd[1527]: time="2025-09-09T05:16:58.905377755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\" id:\"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\" pid:3099 exited_at:{seconds:1757395018 nanos:894359151}" Sep 9 05:16:58.937836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b-rootfs.mount: Deactivated successfully. Sep 9 05:16:59.296374 containerd[1527]: time="2025-09-09T05:16:59.295873038Z" level=info msg="CreateContainer within sandbox \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:16:59.303482 containerd[1527]: time="2025-09-09T05:16:59.303407894Z" level=info msg="Container 24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:16:59.309415 containerd[1527]: time="2025-09-09T05:16:59.309371218Z" level=info msg="CreateContainer within sandbox \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\"" Sep 9 05:16:59.310878 containerd[1527]: time="2025-09-09T05:16:59.310113998Z" level=info msg="StartContainer for \"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\"" Sep 9 05:16:59.311147 containerd[1527]: time="2025-09-09T05:16:59.311118733Z" level=info msg="connecting to shim 24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2" address="unix:///run/containerd/s/b48391e19262fd549e14d451a8bf77122b8f2567235c268673353e5dfc3f28e7" protocol=ttrpc version=3 Sep 9 05:16:59.330004 systemd[1]: Started cri-containerd-24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2.scope - libcontainer container 24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2. Sep 9 05:16:59.361351 containerd[1527]: time="2025-09-09T05:16:59.361312380Z" level=info msg="StartContainer for \"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\" returns successfully" Sep 9 05:16:59.372283 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 05:16:59.372486 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:16:59.373075 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:16:59.374445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:16:59.376881 systemd[1]: cri-containerd-24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2.scope: Deactivated successfully. 
Sep 9 05:16:59.377453 containerd[1527]: time="2025-09-09T05:16:59.377411511Z" level=info msg="received exit event container_id:\"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\" id:\"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\" pid:3144 exited_at:{seconds:1757395019 nanos:377209203}" Sep 9 05:16:59.379053 containerd[1527]: time="2025-09-09T05:16:59.378963840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\" id:\"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\" pid:3144 exited_at:{seconds:1757395019 nanos:377209203}" Sep 9 05:16:59.409145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:16:59.561359 update_engine[1512]: I20250909 05:16:59.561017 1512 update_attempter.cc:509] Updating boot flags... Sep 9 05:16:59.918244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3438049017.mount: Deactivated successfully. Sep 9 05:17:00.306330 containerd[1527]: time="2025-09-09T05:17:00.306211709Z" level=info msg="CreateContainer within sandbox \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:17:00.320428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2886607211.mount: Deactivated successfully. Sep 9 05:17:00.320955 containerd[1527]: time="2025-09-09T05:17:00.320910273Z" level=info msg="Container aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:17:00.329111 containerd[1527]: time="2025-09-09T05:17:00.329056078Z" level=info msg="CreateContainer within sandbox \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\"" Sep 9 05:17:00.329903 containerd[1527]: time="2025-09-09T05:17:00.329870222Z" level=info msg="StartContainer for \"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\"" Sep 9 05:17:00.331438 containerd[1527]: time="2025-09-09T05:17:00.331409299Z" level=info msg="connecting to shim aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7" address="unix:///run/containerd/s/b48391e19262fd549e14d451a8bf77122b8f2567235c268673353e5dfc3f28e7" protocol=ttrpc version=3 Sep 9 05:17:00.356008 systemd[1]: Started cri-containerd-aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7.scope - libcontainer container aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7. Sep 9 05:17:00.426613 systemd[1]: cri-containerd-aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7.scope: Deactivated successfully. 
Sep 9 05:17:00.427357 containerd[1527]: time="2025-09-09T05:17:00.427327315Z" level=info msg="StartContainer for \"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\" returns successfully" Sep 9 05:17:00.427982 containerd[1527]: time="2025-09-09T05:17:00.427580028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\" id:\"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\" pid:3214 exited_at:{seconds:1757395020 nanos:427365240}" Sep 9 05:17:00.428113 containerd[1527]: time="2025-09-09T05:17:00.427627194Z" level=info msg="received exit event container_id:\"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\" id:\"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\" pid:3214 exited_at:{seconds:1757395020 nanos:427365240}" Sep 9 05:17:01.304418 containerd[1527]: time="2025-09-09T05:17:01.304372306Z" level=info msg="CreateContainer within sandbox \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 05:17:01.329239 containerd[1527]: time="2025-09-09T05:17:01.329010511Z" level=info msg="Container 1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:17:01.332394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2582861913.mount: Deactivated successfully. Sep 9 05:17:01.335506 containerd[1527]: time="2025-09-09T05:17:01.335452377Z" level=info msg="CreateContainer within sandbox \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\"" Sep 9 05:17:01.337109 containerd[1527]: time="2025-09-09T05:17:01.335951998Z" level=info msg="StartContainer for \"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\"" Sep 9 05:17:01.337109 containerd[1527]: time="2025-09-09T05:17:01.336766337Z" level=info msg="connecting to shim 1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9" address="unix:///run/containerd/s/b48391e19262fd549e14d451a8bf77122b8f2567235c268673353e5dfc3f28e7" protocol=ttrpc version=3 Sep 9 05:17:01.358042 systemd[1]: Started cri-containerd-1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9.scope - libcontainer container 1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9. Sep 9 05:17:01.382176 systemd[1]: cri-containerd-1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9.scope: Deactivated successfully. 
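Each journal entry in this excerpt follows the same shape: a syslog-style prefix ("Sep 9 HH:MM:SS.micro unit[pid]:") followed by the unit's own message. A small parsing sketch, using an abridged line from the excerpt and field names of my own choosing:

    import re

    # Abridged containerd record from the excerpt above (message shortened).
    line = ('Sep 9 05:17:01.384550 containerd[1527]: '
            'time="2025-09-09T05:17:01.384513761Z" level=info '
            'msg="StartContainer returns successfully"')

    # timestamp, unit name, PID, free-form message
    m = re.match(r'^(\w+ \d+ [\d:.]+) (\S+)\[(\d+)\]: (.*)$', line)
    if m:
        timestamp, unit, pid, message = m.groups()
        print(unit, pid, message[:40])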
Sep 9 05:17:01.383463 containerd[1527]: time="2025-09-09T05:17:01.383020779Z" level=info msg="received exit event container_id:\"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\" id:\"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\" pid:3251 exited_at:{seconds:1757395021 nanos:382753267}" Sep 9 05:17:01.383463 containerd[1527]: time="2025-09-09T05:17:01.383035301Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\" id:\"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\" pid:3251 exited_at:{seconds:1757395021 nanos:382753267}" Sep 9 05:17:01.384550 containerd[1527]: time="2025-09-09T05:17:01.384513761Z" level=info msg="StartContainer for \"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\" returns successfully" Sep 9 05:17:01.402927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9-rootfs.mount: Deactivated successfully. Sep 9 05:17:02.324038 containerd[1527]: time="2025-09-09T05:17:02.323046447Z" level=info msg="CreateContainer within sandbox \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 05:17:02.338363 containerd[1527]: time="2025-09-09T05:17:02.338298418Z" level=info msg="Container 6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:17:02.338607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618555788.mount: Deactivated successfully. Sep 9 05:17:02.345336 containerd[1527]: time="2025-09-09T05:17:02.345302472Z" level=info msg="CreateContainer within sandbox \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\"" Sep 9 05:17:02.346868 containerd[1527]: time="2025-09-09T05:17:02.346687553Z" level=info msg="StartContainer for \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\"" Sep 9 05:17:02.348293 containerd[1527]: time="2025-09-09T05:17:02.348223051Z" level=info msg="connecting to shim 6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355" address="unix:///run/containerd/s/b48391e19262fd549e14d451a8bf77122b8f2567235c268673353e5dfc3f28e7" protocol=ttrpc version=3 Sep 9 05:17:02.369981 systemd[1]: Started cri-containerd-6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355.scope - libcontainer container 6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355. 
Sep 9 05:17:02.402686 containerd[1527]: time="2025-09-09T05:17:02.402574524Z" level=info msg="StartContainer for \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" returns successfully" Sep 9 05:17:02.491140 containerd[1527]: time="2025-09-09T05:17:02.491101047Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:17:02.492133 containerd[1527]: time="2025-09-09T05:17:02.492103403Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 9 05:17:02.492979 containerd[1527]: time="2025-09-09T05:17:02.492947021Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:17:02.494221 containerd[1527]: time="2025-09-09T05:17:02.494196646Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.760023985s" Sep 9 05:17:02.494338 containerd[1527]: time="2025-09-09T05:17:02.494312380Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 05:17:02.498488 containerd[1527]: time="2025-09-09T05:17:02.498459022Z" level=info msg="CreateContainer within sandbox \"737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 05:17:02.508104 containerd[1527]: time="2025-09-09T05:17:02.508003450Z" level=info msg="Container a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:17:02.513873 containerd[1527]: time="2025-09-09T05:17:02.513777241Z" level=info msg="CreateContainer within sandbox \"737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\"" Sep 9 05:17:02.514971 containerd[1527]: time="2025-09-09T05:17:02.514476242Z" level=info msg="StartContainer for \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\"" Sep 9 05:17:02.515667 containerd[1527]: time="2025-09-09T05:17:02.515643738Z" level=info msg="connecting to shim a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab" address="unix:///run/containerd/s/d8fb036e48a5016d5dfdf7c6480222a8f69d69559c589d34dc8dd6f82c4b8a92" protocol=ttrpc version=3 Sep 9 05:17:02.533776 containerd[1527]: time="2025-09-09T05:17:02.533730758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" id:\"0656353bd1c51011efbe7f1b97d0c3ac3c9b9fa91a1a415064b69b23ff834ba1\" pid:3327 exited_at:{seconds:1757395022 nanos:529300724}" Sep 9 05:17:02.540015 systemd[1]: Started 
cri-containerd-a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab.scope - libcontainer container a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab. Sep 9 05:17:02.549151 kubelet[2671]: I0909 05:17:02.549116 2671 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 05:17:02.619907 containerd[1527]: time="2025-09-09T05:17:02.618884329Z" level=info msg="StartContainer for \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\" returns successfully" Sep 9 05:17:02.664901 systemd[1]: Created slice kubepods-burstable-podc7b9f68d_b677_4728_ba63_92babed3d2ee.slice - libcontainer container kubepods-burstable-podc7b9f68d_b677_4728_ba63_92babed3d2ee.slice. Sep 9 05:17:02.674932 kubelet[2671]: I0909 05:17:02.668309 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7b9f68d-b677-4728-ba63-92babed3d2ee-config-volume\") pod \"coredns-674b8bbfcf-bdqzv\" (UID: \"c7b9f68d-b677-4728-ba63-92babed3d2ee\") " pod="kube-system/coredns-674b8bbfcf-bdqzv" Sep 9 05:17:02.674932 kubelet[2671]: I0909 05:17:02.672894 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c07a60ea-f4d7-495a-bab4-9aab22384941-config-volume\") pod \"coredns-674b8bbfcf-8n8vt\" (UID: \"c07a60ea-f4d7-495a-bab4-9aab22384941\") " pod="kube-system/coredns-674b8bbfcf-8n8vt" Sep 9 05:17:02.674932 kubelet[2671]: I0909 05:17:02.673109 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsl47\" (UniqueName: \"kubernetes.io/projected/c07a60ea-f4d7-495a-bab4-9aab22384941-kube-api-access-xsl47\") pod \"coredns-674b8bbfcf-8n8vt\" (UID: \"c07a60ea-f4d7-495a-bab4-9aab22384941\") " pod="kube-system/coredns-674b8bbfcf-8n8vt" Sep 9 05:17:02.674932 kubelet[2671]: I0909 05:17:02.673134 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc6vc\" (UniqueName: \"kubernetes.io/projected/c7b9f68d-b677-4728-ba63-92babed3d2ee-kube-api-access-bc6vc\") pod \"coredns-674b8bbfcf-bdqzv\" (UID: \"c7b9f68d-b677-4728-ba63-92babed3d2ee\") " pod="kube-system/coredns-674b8bbfcf-bdqzv" Sep 9 05:17:02.678604 systemd[1]: Created slice kubepods-burstable-podc07a60ea_f4d7_495a_bab4_9aab22384941.slice - libcontainer container kubepods-burstable-podc07a60ea_f4d7_495a_bab4_9aab22384941.slice. 
Sep 9 05:17:02.973835 containerd[1527]: time="2025-09-09T05:17:02.973708823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bdqzv,Uid:c7b9f68d-b677-4728-ba63-92babed3d2ee,Namespace:kube-system,Attempt:0,}" Sep 9 05:17:02.982357 containerd[1527]: time="2025-09-09T05:17:02.982226893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8n8vt,Uid:c07a60ea-f4d7-495a-bab4-9aab22384941,Namespace:kube-system,Attempt:0,}" Sep 9 05:17:03.364332 kubelet[2671]: I0909 05:17:03.364208 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ph4df" podStartSLOduration=1.940400426 podStartE2EDuration="14.364193915s" podCreationTimestamp="2025-09-09 05:16:49 +0000 UTC" firstStartedPulling="2025-09-09 05:16:50.07171471 +0000 UTC m=+6.913742416" lastFinishedPulling="2025-09-09 05:17:02.495508199 +0000 UTC m=+19.337535905" observedRunningTime="2025-09-09 05:17:03.36415287 +0000 UTC m=+20.206180576" watchObservedRunningTime="2025-09-09 05:17:03.364193915 +0000 UTC m=+20.206221581" Sep 9 05:17:03.364638 kubelet[2671]: I0909 05:17:03.364399 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lchzc" podStartSLOduration=5.363035934 podStartE2EDuration="14.364394457s" podCreationTimestamp="2025-09-09 05:16:49 +0000 UTC" firstStartedPulling="2025-09-09 05:16:49.732587187 +0000 UTC m=+6.574614893" lastFinishedPulling="2025-09-09 05:16:58.73394575 +0000 UTC m=+15.575973416" observedRunningTime="2025-09-09 05:17:03.354906487 +0000 UTC m=+20.196934193" watchObservedRunningTime="2025-09-09 05:17:03.364394457 +0000 UTC m=+20.206422123" Sep 9 05:17:05.262206 systemd-networkd[1427]: cilium_host: Link UP Sep 9 05:17:05.262314 systemd-networkd[1427]: cilium_net: Link UP Sep 9 05:17:05.262428 systemd-networkd[1427]: cilium_net: Gained carrier Sep 9 05:17:05.262532 systemd-networkd[1427]: cilium_host: Gained carrier Sep 9 05:17:05.334119 systemd-networkd[1427]: cilium_vxlan: Link UP Sep 9 05:17:05.334125 systemd-networkd[1427]: cilium_vxlan: Gained carrier Sep 9 05:17:05.447952 systemd-networkd[1427]: cilium_host: Gained IPv6LL Sep 9 05:17:05.527925 systemd-networkd[1427]: cilium_net: Gained IPv6LL Sep 9 05:17:05.582859 kernel: NET: Registered PF_ALG protocol family Sep 9 05:17:06.139987 systemd-networkd[1427]: lxc_health: Link UP Sep 9 05:17:06.140209 systemd-networkd[1427]: lxc_health: Gained carrier Sep 9 05:17:06.541588 systemd-networkd[1427]: lxcaa09e97d642c: Link UP Sep 9 05:17:06.542941 kernel: eth0: renamed from tmpfbbdf Sep 9 05:17:06.543712 systemd-networkd[1427]: lxc482b1210fde0: Link UP Sep 9 05:17:06.545676 systemd-networkd[1427]: lxcaa09e97d642c: Gained carrier Sep 9 05:17:06.545942 kernel: eth0: renamed from tmp5b1d2 Sep 9 05:17:06.545987 systemd-networkd[1427]: lxc482b1210fde0: Gained carrier Sep 9 05:17:07.032981 systemd-networkd[1427]: cilium_vxlan: Gained IPv6LL Sep 9 05:17:07.480404 systemd-networkd[1427]: lxc_health: Gained IPv6LL Sep 9 05:17:07.735935 systemd-networkd[1427]: lxcaa09e97d642c: Gained IPv6LL Sep 9 05:17:08.376042 systemd-networkd[1427]: lxc482b1210fde0: Gained IPv6LL Sep 9 05:17:09.937996 containerd[1527]: time="2025-09-09T05:17:09.937941113Z" level=info msg="connecting to shim 5b1d2b6d59b202dc3d1afcabc678203af6deacb9e8331a3a33401e6cc8e067c6" address="unix:///run/containerd/s/610849563094c6d75e7c65708da439dfc69aa97a291f626f6f718f0cc34757c2" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:17:09.940851 containerd[1527]: 
time="2025-09-09T05:17:09.940669343Z" level=info msg="connecting to shim fbbdfa0bc7d5185787f9596b12fd62350a2430fc58f10f5971b5d1499cf60e63" address="unix:///run/containerd/s/51b1a6211de1492c78476e5674cb61eb685cfea7d6294915229a759b023c3ec0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:17:09.968976 systemd[1]: Started cri-containerd-5b1d2b6d59b202dc3d1afcabc678203af6deacb9e8331a3a33401e6cc8e067c6.scope - libcontainer container 5b1d2b6d59b202dc3d1afcabc678203af6deacb9e8331a3a33401e6cc8e067c6. Sep 9 05:17:09.970289 systemd[1]: Started cri-containerd-fbbdfa0bc7d5185787f9596b12fd62350a2430fc58f10f5971b5d1499cf60e63.scope - libcontainer container fbbdfa0bc7d5185787f9596b12fd62350a2430fc58f10f5971b5d1499cf60e63. Sep 9 05:17:09.981173 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:17:09.981709 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:17:10.003779 containerd[1527]: time="2025-09-09T05:17:10.003653049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bdqzv,Uid:c7b9f68d-b677-4728-ba63-92babed3d2ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b1d2b6d59b202dc3d1afcabc678203af6deacb9e8331a3a33401e6cc8e067c6\"" Sep 9 05:17:10.004362 containerd[1527]: time="2025-09-09T05:17:10.004338504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8n8vt,Uid:c07a60ea-f4d7-495a-bab4-9aab22384941,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbbdfa0bc7d5185787f9596b12fd62350a2430fc58f10f5971b5d1499cf60e63\"" Sep 9 05:17:10.008987 containerd[1527]: time="2025-09-09T05:17:10.008937076Z" level=info msg="CreateContainer within sandbox \"5b1d2b6d59b202dc3d1afcabc678203af6deacb9e8331a3a33401e6cc8e067c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:17:10.009793 containerd[1527]: time="2025-09-09T05:17:10.009471880Z" level=info msg="CreateContainer within sandbox \"fbbdfa0bc7d5185787f9596b12fd62350a2430fc58f10f5971b5d1499cf60e63\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:17:10.017056 containerd[1527]: time="2025-09-09T05:17:10.017026171Z" level=info msg="Container fc1e9a28b6f35ef6bda57ddec7cb32f87cd5cbd769ad50f32b24f6c6dd046348: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:17:10.020041 containerd[1527]: time="2025-09-09T05:17:10.019998371Z" level=info msg="Container c9866a7de828cd92b3aba03f226e8c26e01fefcf73941c6b79f7f3512a778d9e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:17:10.022240 containerd[1527]: time="2025-09-09T05:17:10.022206910Z" level=info msg="CreateContainer within sandbox \"fbbdfa0bc7d5185787f9596b12fd62350a2430fc58f10f5971b5d1499cf60e63\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc1e9a28b6f35ef6bda57ddec7cb32f87cd5cbd769ad50f32b24f6c6dd046348\"" Sep 9 05:17:10.022884 containerd[1527]: time="2025-09-09T05:17:10.022857723Z" level=info msg="StartContainer for \"fc1e9a28b6f35ef6bda57ddec7cb32f87cd5cbd769ad50f32b24f6c6dd046348\"" Sep 9 05:17:10.023642 containerd[1527]: time="2025-09-09T05:17:10.023619824Z" level=info msg="connecting to shim fc1e9a28b6f35ef6bda57ddec7cb32f87cd5cbd769ad50f32b24f6c6dd046348" address="unix:///run/containerd/s/51b1a6211de1492c78476e5674cb61eb685cfea7d6294915229a759b023c3ec0" protocol=ttrpc version=3 Sep 9 05:17:10.033306 containerd[1527]: time="2025-09-09T05:17:10.033155476Z" level=info msg="CreateContainer within sandbox 
\"5b1d2b6d59b202dc3d1afcabc678203af6deacb9e8331a3a33401e6cc8e067c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c9866a7de828cd92b3aba03f226e8c26e01fefcf73941c6b79f7f3512a778d9e\"" Sep 9 05:17:10.034205 containerd[1527]: time="2025-09-09T05:17:10.034172598Z" level=info msg="StartContainer for \"c9866a7de828cd92b3aba03f226e8c26e01fefcf73941c6b79f7f3512a778d9e\"" Sep 9 05:17:10.036303 containerd[1527]: time="2025-09-09T05:17:10.036263487Z" level=info msg="connecting to shim c9866a7de828cd92b3aba03f226e8c26e01fefcf73941c6b79f7f3512a778d9e" address="unix:///run/containerd/s/610849563094c6d75e7c65708da439dfc69aa97a291f626f6f718f0cc34757c2" protocol=ttrpc version=3 Sep 9 05:17:10.044980 systemd[1]: Started cri-containerd-fc1e9a28b6f35ef6bda57ddec7cb32f87cd5cbd769ad50f32b24f6c6dd046348.scope - libcontainer container fc1e9a28b6f35ef6bda57ddec7cb32f87cd5cbd769ad50f32b24f6c6dd046348. Sep 9 05:17:10.066006 systemd[1]: Started cri-containerd-c9866a7de828cd92b3aba03f226e8c26e01fefcf73941c6b79f7f3512a778d9e.scope - libcontainer container c9866a7de828cd92b3aba03f226e8c26e01fefcf73941c6b79f7f3512a778d9e. Sep 9 05:17:10.104235 containerd[1527]: time="2025-09-09T05:17:10.104195824Z" level=info msg="StartContainer for \"fc1e9a28b6f35ef6bda57ddec7cb32f87cd5cbd769ad50f32b24f6c6dd046348\" returns successfully" Sep 9 05:17:10.106849 containerd[1527]: time="2025-09-09T05:17:10.106749350Z" level=info msg="StartContainer for \"c9866a7de828cd92b3aba03f226e8c26e01fefcf73941c6b79f7f3512a778d9e\" returns successfully" Sep 9 05:17:10.381901 kubelet[2671]: I0909 05:17:10.381718 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8n8vt" podStartSLOduration=21.381702238 podStartE2EDuration="21.381702238s" podCreationTimestamp="2025-09-09 05:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:17:10.372419087 +0000 UTC m=+27.214446793" watchObservedRunningTime="2025-09-09 05:17:10.381702238 +0000 UTC m=+27.223729944" Sep 9 05:17:10.391557 kubelet[2671]: I0909 05:17:10.391505 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bdqzv" podStartSLOduration=21.39148931 podStartE2EDuration="21.39148931s" podCreationTimestamp="2025-09-09 05:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:17:10.391475508 +0000 UTC m=+27.233503214" watchObservedRunningTime="2025-09-09 05:17:10.39148931 +0000 UTC m=+27.233517016" Sep 9 05:17:12.721955 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:44420.service - OpenSSH per-connection server daemon (10.0.0.1:44420). Sep 9 05:17:12.792773 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 44420 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:12.793954 sshd-session[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:12.797951 systemd-logind[1509]: New session 8 of user core. Sep 9 05:17:12.803978 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 05:17:12.921771 sshd[4016]: Connection closed by 10.0.0.1 port 44420 Sep 9 05:17:12.922064 sshd-session[4013]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:12.925169 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:44420.service: Deactivated successfully. 
Sep 9 05:17:12.928292 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 05:17:12.928998 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:17:12.930324 systemd-logind[1509]: Removed session 8. Sep 9 05:17:16.129029 kubelet[2671]: I0909 05:17:16.128980 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:17:17.937208 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:44430.service - OpenSSH per-connection server daemon (10.0.0.1:44430). Sep 9 05:17:17.999375 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 44430 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:18.000590 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:18.004970 systemd-logind[1509]: New session 9 of user core. Sep 9 05:17:18.016051 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 05:17:18.128146 sshd[4034]: Connection closed by 10.0.0.1 port 44430 Sep 9 05:17:18.128679 sshd-session[4031]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:18.132573 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:44430.service: Deactivated successfully. Sep 9 05:17:18.135194 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 05:17:18.135941 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit. Sep 9 05:17:18.136998 systemd-logind[1509]: Removed session 9. Sep 9 05:17:23.143986 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:58634.service - OpenSSH per-connection server daemon (10.0.0.1:58634). Sep 9 05:17:23.208079 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 58634 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:23.209180 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:23.214893 systemd-logind[1509]: New session 10 of user core. Sep 9 05:17:23.225030 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 05:17:23.340501 sshd[4054]: Connection closed by 10.0.0.1 port 58634 Sep 9 05:17:23.341049 sshd-session[4051]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:23.350897 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:58634.service: Deactivated successfully. Sep 9 05:17:23.352632 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 05:17:23.354489 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit. Sep 9 05:17:23.356723 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:58636.service - OpenSSH per-connection server daemon (10.0.0.1:58636). Sep 9 05:17:23.357556 systemd-logind[1509]: Removed session 10. Sep 9 05:17:23.412338 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 58636 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:23.414766 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:23.420434 systemd-logind[1509]: New session 11 of user core. Sep 9 05:17:23.430960 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 05:17:23.594794 sshd[4072]: Connection closed by 10.0.0.1 port 58636 Sep 9 05:17:23.595855 sshd-session[4069]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:23.610739 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:58636.service: Deactivated successfully. Sep 9 05:17:23.613018 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 05:17:23.617108 systemd-logind[1509]: Session 11 logged out. 
Waiting for processes to exit. Sep 9 05:17:23.619404 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:58652.service - OpenSSH per-connection server daemon (10.0.0.1:58652). Sep 9 05:17:23.620693 systemd-logind[1509]: Removed session 11. Sep 9 05:17:23.678233 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 58652 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:23.679452 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:23.684561 systemd-logind[1509]: New session 12 of user core. Sep 9 05:17:23.691945 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 05:17:23.810287 sshd[4087]: Connection closed by 10.0.0.1 port 58652 Sep 9 05:17:23.810607 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:23.813905 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:58652.service: Deactivated successfully. Sep 9 05:17:23.815428 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 05:17:23.817997 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit. Sep 9 05:17:23.819557 systemd-logind[1509]: Removed session 12. Sep 9 05:17:28.828945 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:58656.service - OpenSSH per-connection server daemon (10.0.0.1:58656). Sep 9 05:17:28.885596 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 58656 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:28.886737 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:28.891122 systemd-logind[1509]: New session 13 of user core. Sep 9 05:17:28.903994 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 05:17:29.008731 sshd[4105]: Connection closed by 10.0.0.1 port 58656 Sep 9 05:17:29.009041 sshd-session[4102]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:29.011855 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:58656.service: Deactivated successfully. Sep 9 05:17:29.013885 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 05:17:29.015598 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit. Sep 9 05:17:29.017652 systemd-logind[1509]: Removed session 13. Sep 9 05:17:34.025043 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:38676.service - OpenSSH per-connection server daemon (10.0.0.1:38676). Sep 9 05:17:34.082941 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 38676 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:34.084184 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:34.088618 systemd-logind[1509]: New session 14 of user core. Sep 9 05:17:34.102982 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 05:17:34.213842 sshd[4121]: Connection closed by 10.0.0.1 port 38676 Sep 9 05:17:34.212770 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:34.222983 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:38676.service: Deactivated successfully. Sep 9 05:17:34.224461 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 05:17:34.225179 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit. Sep 9 05:17:34.227098 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:38686.service - OpenSSH per-connection server daemon (10.0.0.1:38686). Sep 9 05:17:34.228689 systemd-logind[1509]: Removed session 14. 
Sep 9 05:17:34.284584 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 38686 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:34.285576 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:34.289049 systemd-logind[1509]: New session 15 of user core. Sep 9 05:17:34.301977 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 05:17:34.513604 sshd[4137]: Connection closed by 10.0.0.1 port 38686 Sep 9 05:17:34.514214 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:34.528765 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:38686.service: Deactivated successfully. Sep 9 05:17:34.530282 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 05:17:34.530976 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit. Sep 9 05:17:34.533343 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:38688.service - OpenSSH per-connection server daemon (10.0.0.1:38688). Sep 9 05:17:34.534209 systemd-logind[1509]: Removed session 15. Sep 9 05:17:34.591231 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 38688 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:34.592325 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:34.595871 systemd-logind[1509]: New session 16 of user core. Sep 9 05:17:34.604967 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 05:17:35.194791 sshd[4152]: Connection closed by 10.0.0.1 port 38688 Sep 9 05:17:35.195173 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:35.206543 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:38688.service: Deactivated successfully. Sep 9 05:17:35.211615 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 05:17:35.213869 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit. Sep 9 05:17:35.216251 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:38702.service - OpenSSH per-connection server daemon (10.0.0.1:38702). Sep 9 05:17:35.221030 systemd-logind[1509]: Removed session 16. Sep 9 05:17:35.276778 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 38702 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:35.277876 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:35.281520 systemd-logind[1509]: New session 17 of user core. Sep 9 05:17:35.292956 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 05:17:35.504898 sshd[4175]: Connection closed by 10.0.0.1 port 38702 Sep 9 05:17:35.504960 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:35.512749 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:38702.service: Deactivated successfully. Sep 9 05:17:35.516319 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 05:17:35.516986 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit. Sep 9 05:17:35.519481 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:38714.service - OpenSSH per-connection server daemon (10.0.0.1:38714). Sep 9 05:17:35.520100 systemd-logind[1509]: Removed session 17. 
Sep 9 05:17:35.577696 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 38714 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:35.578855 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:35.582413 systemd-logind[1509]: New session 18 of user core. Sep 9 05:17:35.592627 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 05:17:35.702840 sshd[4190]: Connection closed by 10.0.0.1 port 38714 Sep 9 05:17:35.703298 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:35.706796 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:38714.service: Deactivated successfully. Sep 9 05:17:35.708450 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 05:17:35.709096 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit. Sep 9 05:17:35.709952 systemd-logind[1509]: Removed session 18. Sep 9 05:17:40.721782 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:33068.service - OpenSSH per-connection server daemon (10.0.0.1:33068). Sep 9 05:17:40.773168 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 33068 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:40.774190 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:40.778137 systemd-logind[1509]: New session 19 of user core. Sep 9 05:17:40.787222 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 05:17:40.895723 sshd[4210]: Connection closed by 10.0.0.1 port 33068 Sep 9 05:17:40.895648 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:40.899280 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:33068.service: Deactivated successfully. Sep 9 05:17:40.901247 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 05:17:40.902525 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit. Sep 9 05:17:40.903427 systemd-logind[1509]: Removed session 19. Sep 9 05:17:45.907022 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:33078.service - OpenSSH per-connection server daemon (10.0.0.1:33078). Sep 9 05:17:45.958367 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 33078 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:45.959380 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:45.963382 systemd-logind[1509]: New session 20 of user core. Sep 9 05:17:45.973034 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 05:17:46.078872 sshd[4229]: Connection closed by 10.0.0.1 port 33078 Sep 9 05:17:46.079187 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:46.088745 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:33078.service: Deactivated successfully. Sep 9 05:17:46.090226 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 05:17:46.091889 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit. Sep 9 05:17:46.093217 systemd[1]: Started sshd@20-10.0.0.147:22-10.0.0.1:33080.service - OpenSSH per-connection server daemon (10.0.0.1:33080). Sep 9 05:17:46.094647 systemd-logind[1509]: Removed session 20. 
Sep 9 05:17:46.153985 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 33080 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:46.155068 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:46.158885 systemd-logind[1509]: New session 21 of user core. Sep 9 05:17:46.164972 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 05:17:47.996715 containerd[1527]: time="2025-09-09T05:17:47.996552085Z" level=info msg="StopContainer for \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\" with timeout 30 (s)" Sep 9 05:17:47.998025 containerd[1527]: time="2025-09-09T05:17:47.997999134Z" level=info msg="Stop container \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\" with signal terminated" Sep 9 05:17:48.010478 systemd[1]: cri-containerd-a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab.scope: Deactivated successfully. Sep 9 05:17:48.014155 containerd[1527]: time="2025-09-09T05:17:48.014104150Z" level=info msg="received exit event container_id:\"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\" id:\"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\" pid:3369 exited_at:{seconds:1757395068 nanos:11675869}" Sep 9 05:17:48.014533 containerd[1527]: time="2025-09-09T05:17:48.014398200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\" id:\"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\" pid:3369 exited_at:{seconds:1757395068 nanos:11675869}" Sep 9 05:17:48.036881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab-rootfs.mount: Deactivated successfully. 
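The StopContainer records above show the operator container being stopped "with timeout 30 (s)" and "with signal terminated". As a generic illustration of that graceful-stop pattern (signal first, wait up to a deadline, then force-kill), not containerd's actual implementation, a minimal sketch:

    import signal
    import subprocess

    # Generic stop-with-timeout pattern; the 30s mirrors the record above.
    proc = subprocess.Popen(["sleep", "300"])
    proc.send_signal(signal.SIGTERM)       # "with signal terminated"
    try:
        proc.wait(timeout=30)
    except subprocess.TimeoutExpired:
        proc.kill()                        # hard kill once the deadline passes
        proc.wait()
    print("exit code:", proc.returncode)   # negative signal number here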
Sep 9 05:17:48.037416 containerd[1527]: time="2025-09-09T05:17:48.037284001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" id:\"af6054c3a80167661d5645da84666e222bef1f869e0d283a43b49dda094e4611\" pid:4280 exited_at:{seconds:1757395068 nanos:37031913}" Sep 9 05:17:48.040236 containerd[1527]: time="2025-09-09T05:17:48.040164417Z" level=info msg="StopContainer for \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" with timeout 2 (s)" Sep 9 05:17:48.040534 containerd[1527]: time="2025-09-09T05:17:48.040514148Z" level=info msg="Stop container \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" with signal terminated" Sep 9 05:17:48.042634 containerd[1527]: time="2025-09-09T05:17:48.042588897Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:17:48.046312 containerd[1527]: time="2025-09-09T05:17:48.046285980Z" level=info msg="StopContainer for \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\" returns successfully" Sep 9 05:17:48.050431 systemd-networkd[1427]: lxc_health: Link DOWN Sep 9 05:17:48.050441 systemd-networkd[1427]: lxc_health: Lost carrier Sep 9 05:17:48.052357 containerd[1527]: time="2025-09-09T05:17:48.052316901Z" level=info msg="StopPodSandbox for \"737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600\"" Sep 9 05:17:48.060015 containerd[1527]: time="2025-09-09T05:17:48.059970715Z" level=info msg="Container to stop \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:17:48.066056 systemd[1]: cri-containerd-737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600.scope: Deactivated successfully. Sep 9 05:17:48.072144 systemd[1]: cri-containerd-6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355.scope: Deactivated successfully. Sep 9 05:17:48.072665 systemd[1]: cri-containerd-6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355.scope: Consumed 5.909s CPU time, 121.8M memory peak, 140K read from disk, 12.9M written to disk. 
Sep 9 05:17:48.073408 containerd[1527]: time="2025-09-09T05:17:48.073380041Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" id:\"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" pid:3292 exited_at:{seconds:1757395068 nanos:73104872}" Sep 9 05:17:48.073448 containerd[1527]: time="2025-09-09T05:17:48.073408242Z" level=info msg="received exit event container_id:\"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" id:\"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" pid:3292 exited_at:{seconds:1757395068 nanos:73104872}" Sep 9 05:17:48.075517 containerd[1527]: time="2025-09-09T05:17:48.075458830Z" level=info msg="TaskExit event in podsandbox handler container_id:\"737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600\" id:\"737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600\" pid:2955 exit_status:137 exited_at:{seconds:1757395068 nanos:75215942}" Sep 9 05:17:48.093479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355-rootfs.mount: Deactivated successfully. Sep 9 05:17:48.102601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600-rootfs.mount: Deactivated successfully. Sep 9 05:17:48.110170 containerd[1527]: time="2025-09-09T05:17:48.110135143Z" level=info msg="shim disconnected" id=737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600 namespace=k8s.io Sep 9 05:17:48.114188 containerd[1527]: time="2025-09-09T05:17:48.110165104Z" level=warning msg="cleaning up after shim disconnected" id=737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600 namespace=k8s.io Sep 9 05:17:48.114188 containerd[1527]: time="2025-09-09T05:17:48.114183718Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:17:48.114306 containerd[1527]: time="2025-09-09T05:17:48.113316129Z" level=info msg="StopContainer for \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" returns successfully" Sep 9 05:17:48.114792 containerd[1527]: time="2025-09-09T05:17:48.114754297Z" level=info msg="StopPodSandbox for \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\"" Sep 9 05:17:48.114859 containerd[1527]: time="2025-09-09T05:17:48.114845420Z" level=info msg="Container to stop \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:17:48.114884 containerd[1527]: time="2025-09-09T05:17:48.114859781Z" level=info msg="Container to stop \"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:17:48.114884 containerd[1527]: time="2025-09-09T05:17:48.114868301Z" level=info msg="Container to stop \"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:17:48.114884 containerd[1527]: time="2025-09-09T05:17:48.114876861Z" level=info msg="Container to stop \"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:17:48.114943 containerd[1527]: time="2025-09-09T05:17:48.114884861Z" level=info msg="Container to stop \"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:17:48.120001 systemd[1]: cri-containerd-0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2.scope: Deactivated successfully. Sep 9 05:17:48.129430 containerd[1527]: time="2025-09-09T05:17:48.129366023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" id:\"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" pid:2830 exit_status:137 exited_at:{seconds:1757395068 nanos:124231612}" Sep 9 05:17:48.130084 containerd[1527]: time="2025-09-09T05:17:48.129628792Z" level=info msg="received exit event sandbox_id:\"737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600\" exit_status:137 exited_at:{seconds:1757395068 nanos:75215942}" Sep 9 05:17:48.131114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600-shm.mount: Deactivated successfully. Sep 9 05:17:48.131232 containerd[1527]: time="2025-09-09T05:17:48.129705194Z" level=info msg="TearDown network for sandbox \"737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600\" successfully" Sep 9 05:17:48.131232 containerd[1527]: time="2025-09-09T05:17:48.131194444Z" level=info msg="StopPodSandbox for \"737f20412084e44f8d4a6b75e21427bf05affb07b5048b35d3bd410d363c4600\" returns successfully" Sep 9 05:17:48.150521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2-rootfs.mount: Deactivated successfully. Sep 9 05:17:48.156455 containerd[1527]: time="2025-09-09T05:17:48.156249597Z" level=info msg="shim disconnected" id=0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2 namespace=k8s.io Sep 9 05:17:48.156455 containerd[1527]: time="2025-09-09T05:17:48.156282198Z" level=warning msg="cleaning up after shim disconnected" id=0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2 namespace=k8s.io Sep 9 05:17:48.156455 containerd[1527]: time="2025-09-09T05:17:48.156312279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:17:48.170576 containerd[1527]: time="2025-09-09T05:17:48.170469190Z" level=info msg="received exit event sandbox_id:\"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" exit_status:137 exited_at:{seconds:1757395068 nanos:124231612}" Sep 9 05:17:48.170797 containerd[1527]: time="2025-09-09T05:17:48.170612875Z" level=info msg="TearDown network for sandbox \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" successfully" Sep 9 05:17:48.170797 containerd[1527]: time="2025-09-09T05:17:48.170789160Z" level=info msg="StopPodSandbox for \"0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2\" returns successfully" Sep 9 05:17:48.254331 kubelet[2671]: I0909 05:17:48.254222 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-cgroup\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.254331 kubelet[2671]: I0909 05:17:48.254270 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-hubble-tls\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.254331 
kubelet[2671]: I0909 05:17:48.254286 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-host-proc-sys-net\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.254331 kubelet[2671]: I0909 05:17:48.254302 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-host-proc-sys-kernel\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.254331 kubelet[2671]: I0909 05:17:48.254319 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a631756f-f623-4093-a7a3-dddf2be286f3-cilium-config-path\") pod \"a631756f-f623-4093-a7a3-dddf2be286f3\" (UID: \"a631756f-f623-4093-a7a3-dddf2be286f3\") " Sep 9 05:17:48.254331 kubelet[2671]: I0909 05:17:48.254336 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-etc-cni-netd\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.254894 kubelet[2671]: I0909 05:17:48.254350 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-bpf-maps\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.254894 kubelet[2671]: I0909 05:17:48.254368 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-clustermesh-secrets\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.254894 kubelet[2671]: I0909 05:17:48.254383 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-run\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.254894 kubelet[2671]: I0909 05:17:48.254398 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-lib-modules\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.254894 kubelet[2671]: I0909 05:17:48.254411 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cni-path\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.254894 kubelet[2671]: I0909 05:17:48.254431 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-config-path\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.255029 kubelet[2671]: I0909 05:17:48.254446 2671 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnpbq\" (UniqueName: \"kubernetes.io/projected/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-kube-api-access-hnpbq\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.255029 kubelet[2671]: I0909 05:17:48.254461 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hllhz\" (UniqueName: \"kubernetes.io/projected/a631756f-f623-4093-a7a3-dddf2be286f3-kube-api-access-hllhz\") pod \"a631756f-f623-4093-a7a3-dddf2be286f3\" (UID: \"a631756f-f623-4093-a7a3-dddf2be286f3\") " Sep 9 05:17:48.255029 kubelet[2671]: I0909 05:17:48.254478 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-hostproc\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.255029 kubelet[2671]: I0909 05:17:48.254494 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-xtables-lock\") pod \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\" (UID: \"2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb\") " Sep 9 05:17:48.259071 kubelet[2671]: I0909 05:17:48.259027 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:17:48.259155 kubelet[2671]: I0909 05:17:48.259101 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:17:48.259155 kubelet[2671]: I0909 05:17:48.259120 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:17:48.259155 kubelet[2671]: I0909 05:17:48.259133 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cni-path" (OuterVolumeSpecName: "cni-path") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:17:48.259155 kubelet[2671]: I0909 05:17:48.259150 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:17:48.259242 kubelet[2671]: I0909 05:17:48.259161 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:17:48.259242 kubelet[2671]: I0909 05:17:48.259206 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:17:48.259930 kubelet[2671]: I0909 05:17:48.259886 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:17:48.260300 kubelet[2671]: I0909 05:17:48.259955 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 05:17:48.260300 kubelet[2671]: I0909 05:17:48.260007 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:17:48.260740 kubelet[2671]: I0909 05:17:48.260704 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a631756f-f623-4093-a7a3-dddf2be286f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a631756f-f623-4093-a7a3-dddf2be286f3" (UID: "a631756f-f623-4093-a7a3-dddf2be286f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:17:48.260792 kubelet[2671]: I0909 05:17:48.260751 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-hostproc" (OuterVolumeSpecName: "hostproc") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:17:48.260792 kubelet[2671]: I0909 05:17:48.260769 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:17:48.262120 kubelet[2671]: I0909 05:17:48.262087 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:17:48.262120 kubelet[2671]: I0909 05:17:48.262108 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a631756f-f623-4093-a7a3-dddf2be286f3-kube-api-access-hllhz" (OuterVolumeSpecName: "kube-api-access-hllhz") pod "a631756f-f623-4093-a7a3-dddf2be286f3" (UID: "a631756f-f623-4093-a7a3-dddf2be286f3"). InnerVolumeSpecName "kube-api-access-hllhz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:17:48.262203 kubelet[2671]: I0909 05:17:48.262180 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-kube-api-access-hnpbq" (OuterVolumeSpecName: "kube-api-access-hnpbq") pod "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" (UID: "2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb"). InnerVolumeSpecName "kube-api-access-hnpbq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:17:48.292712 kubelet[2671]: E0909 05:17:48.292676 2671 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:17:48.355290 kubelet[2671]: I0909 05:17:48.355255 2671 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355290 kubelet[2671]: I0909 05:17:48.355282 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355290 kubelet[2671]: I0909 05:17:48.355293 2671 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355407 kubelet[2671]: I0909 05:17:48.355301 2671 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355407 kubelet[2671]: I0909 05:17:48.355312 2671 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355407 kubelet[2671]: I0909 05:17:48.355319 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a631756f-f623-4093-a7a3-dddf2be286f3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355407 kubelet[2671]: I0909 05:17:48.355327 2671 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 
05:17:48.355407 kubelet[2671]: I0909 05:17:48.355335 2671 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355407 kubelet[2671]: I0909 05:17:48.355342 2671 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355407 kubelet[2671]: I0909 05:17:48.355349 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355407 kubelet[2671]: I0909 05:17:48.355356 2671 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355609 kubelet[2671]: I0909 05:17:48.355364 2671 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355609 kubelet[2671]: I0909 05:17:48.355371 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355609 kubelet[2671]: I0909 05:17:48.355379 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hnpbq\" (UniqueName: \"kubernetes.io/projected/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-kube-api-access-hnpbq\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355609 kubelet[2671]: I0909 05:17:48.355387 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hllhz\" (UniqueName: \"kubernetes.io/projected/a631756f-f623-4093-a7a3-dddf2be286f3-kube-api-access-hllhz\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.355609 kubelet[2671]: I0909 05:17:48.355394 2671 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 05:17:48.432907 kubelet[2671]: I0909 05:17:48.432871 2671 scope.go:117] "RemoveContainer" containerID="6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355" Sep 9 05:17:48.436437 containerd[1527]: time="2025-09-09T05:17:48.435786412Z" level=info msg="RemoveContainer for \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\"" Sep 9 05:17:48.438499 systemd[1]: Removed slice kubepods-burstable-pod2fd851e2_54c8_4c6d_8b3e_0ccbef1b90fb.slice - libcontainer container kubepods-burstable-pod2fd851e2_54c8_4c6d_8b3e_0ccbef1b90fb.slice. Sep 9 05:17:48.438601 systemd[1]: kubepods-burstable-pod2fd851e2_54c8_4c6d_8b3e_0ccbef1b90fb.slice: Consumed 5.993s CPU time, 122.2M memory peak, 152K read from disk, 12.9M written to disk. Sep 9 05:17:48.443032 systemd[1]: Removed slice kubepods-besteffort-poda631756f_f623_4093_a7a3_dddf2be286f3.slice - libcontainer container kubepods-besteffort-poda631756f_f623_4093_a7a3_dddf2be286f3.slice. 
Sep 9 05:17:48.443705 containerd[1527]: time="2025-09-09T05:17:48.443678355Z" level=info msg="RemoveContainer for \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" returns successfully" Sep 9 05:17:48.444066 kubelet[2671]: I0909 05:17:48.444023 2671 scope.go:117] "RemoveContainer" containerID="1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9" Sep 9 05:17:48.445606 containerd[1527]: time="2025-09-09T05:17:48.445579178Z" level=info msg="RemoveContainer for \"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\"" Sep 9 05:17:48.450136 containerd[1527]: time="2025-09-09T05:17:48.450108449Z" level=info msg="RemoveContainer for \"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\" returns successfully" Sep 9 05:17:48.450532 kubelet[2671]: I0909 05:17:48.450285 2671 scope.go:117] "RemoveContainer" containerID="aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7" Sep 9 05:17:48.459844 containerd[1527]: time="2025-09-09T05:17:48.456939916Z" level=info msg="RemoveContainer for \"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\"" Sep 9 05:17:48.463204 containerd[1527]: time="2025-09-09T05:17:48.463039759Z" level=info msg="RemoveContainer for \"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\" returns successfully" Sep 9 05:17:48.466152 kubelet[2671]: I0909 05:17:48.466128 2671 scope.go:117] "RemoveContainer" containerID="24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2" Sep 9 05:17:48.468835 containerd[1527]: time="2025-09-09T05:17:48.467536308Z" level=info msg="RemoveContainer for \"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\"" Sep 9 05:17:48.473903 containerd[1527]: time="2025-09-09T05:17:48.473875039Z" level=info msg="RemoveContainer for \"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\" returns successfully" Sep 9 05:17:48.474233 kubelet[2671]: I0909 05:17:48.474207 2671 scope.go:117] "RemoveContainer" containerID="c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b" Sep 9 05:17:48.477621 containerd[1527]: time="2025-09-09T05:17:48.477575642Z" level=info msg="RemoveContainer for \"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\"" Sep 9 05:17:48.501574 containerd[1527]: time="2025-09-09T05:17:48.501497518Z" level=info msg="RemoveContainer for \"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\" returns successfully" Sep 9 05:17:48.501944 kubelet[2671]: I0909 05:17:48.501923 2671 scope.go:117] "RemoveContainer" containerID="6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355" Sep 9 05:17:48.502326 containerd[1527]: time="2025-09-09T05:17:48.502222062Z" level=error msg="ContainerStatus for \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\": not found" Sep 9 05:17:48.505675 kubelet[2671]: E0909 05:17:48.504898 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\": not found" containerID="6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355" Sep 9 05:17:48.505675 kubelet[2671]: I0909 05:17:48.505600 2671 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355"} err="failed to get container status \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\": rpc error: code = NotFound desc = an error occurred when try to find container \"6db6af2e09f018c8fb2da6e963ecb5da385ca64d5473c1f2379e8c28b578c355\": not found" Sep 9 05:17:48.505675 kubelet[2671]: I0909 05:17:48.505658 2671 scope.go:117] "RemoveContainer" containerID="1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9" Sep 9 05:17:48.507528 containerd[1527]: time="2025-09-09T05:17:48.506892817Z" level=error msg="ContainerStatus for \"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\": not found" Sep 9 05:17:48.507933 kubelet[2671]: E0909 05:17:48.507896 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\": not found" containerID="1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9" Sep 9 05:17:48.508129 kubelet[2671]: I0909 05:17:48.507932 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9"} err="failed to get container status \"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"1668ff34c8e01e1256c21eac553c6acd49807db9c93aa47f69a80a372de428c9\": not found" Sep 9 05:17:48.508129 kubelet[2671]: I0909 05:17:48.507949 2671 scope.go:117] "RemoveContainer" containerID="aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7" Sep 9 05:17:48.508224 containerd[1527]: time="2025-09-09T05:17:48.508115138Z" level=error msg="ContainerStatus for \"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\": not found" Sep 9 05:17:48.508641 kubelet[2671]: E0909 05:17:48.508607 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\": not found" containerID="aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7" Sep 9 05:17:48.508641 kubelet[2671]: I0909 05:17:48.508634 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7"} err="failed to get container status \"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa9de719f1bfd7a5ea157f2476fa85e480fae8469e71439315543e449b3336e7\": not found" Sep 9 05:17:48.509024 kubelet[2671]: I0909 05:17:48.508650 2671 scope.go:117] "RemoveContainer" containerID="24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2" Sep 9 05:17:48.509024 kubelet[2671]: E0909 05:17:48.508961 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\": not found" containerID="24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2" Sep 9 05:17:48.509341 containerd[1527]: time="2025-09-09T05:17:48.508774439Z" level=error msg="ContainerStatus for \"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\": not found" Sep 9 05:17:48.509341 containerd[1527]: time="2025-09-09T05:17:48.509260136Z" level=error msg="ContainerStatus for \"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\": not found" Sep 9 05:17:48.509849 kubelet[2671]: I0909 05:17:48.509027 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2"} err="failed to get container status \"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"24db5f0b7792da9b9e6db58762c98db9273883852387902e07efc8a5b4acdbd2\": not found" Sep 9 05:17:48.509849 kubelet[2671]: I0909 05:17:48.509044 2671 scope.go:117] "RemoveContainer" containerID="c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b" Sep 9 05:17:48.509849 kubelet[2671]: E0909 05:17:48.509355 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\": not found" containerID="c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b" Sep 9 05:17:48.509849 kubelet[2671]: I0909 05:17:48.509376 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b"} err="failed to get container status \"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3e3c80b2f8e19ea4b27ff1f30b668f62781632afd7e62d92ec24ddda9a4d49b\": not found" Sep 9 05:17:48.509849 kubelet[2671]: I0909 05:17:48.509390 2671 scope.go:117] "RemoveContainer" containerID="a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab" Sep 9 05:17:48.511091 containerd[1527]: time="2025-09-09T05:17:48.511068356Z" level=info msg="RemoveContainer for \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\"" Sep 9 05:17:48.513808 containerd[1527]: time="2025-09-09T05:17:48.513782326Z" level=info msg="RemoveContainer for \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\" returns successfully" Sep 9 05:17:48.514018 kubelet[2671]: I0909 05:17:48.513984 2671 scope.go:117] "RemoveContainer" containerID="a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab" Sep 9 05:17:48.514180 containerd[1527]: time="2025-09-09T05:17:48.514153418Z" level=error msg="ContainerStatus for \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\": not found" Sep 9 05:17:48.514295 kubelet[2671]: E0909 
05:17:48.514273 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\": not found" containerID="a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab" Sep 9 05:17:48.514336 kubelet[2671]: I0909 05:17:48.514299 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab"} err="failed to get container status \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"a03f3070f4366f117c93222b5f5f4e088165bf80be0a480b14099de569f0e8ab\": not found" Sep 9 05:17:49.035920 systemd[1]: var-lib-kubelet-pods-a631756f\x2df623\x2d4093\x2da7a3\x2ddddf2be286f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhllhz.mount: Deactivated successfully. Sep 9 05:17:49.036035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d98a81bf857de62962b5fdacb51bdfc91e354c7e965a446aa244a65def356c2-shm.mount: Deactivated successfully. Sep 9 05:17:49.036093 systemd[1]: var-lib-kubelet-pods-2fd851e2\x2d54c8\x2d4c6d\x2d8b3e\x2d0ccbef1b90fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhnpbq.mount: Deactivated successfully. Sep 9 05:17:49.036148 systemd[1]: var-lib-kubelet-pods-2fd851e2\x2d54c8\x2d4c6d\x2d8b3e\x2d0ccbef1b90fb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 05:17:49.036192 systemd[1]: var-lib-kubelet-pods-2fd851e2\x2d54c8\x2d4c6d\x2d8b3e\x2d0ccbef1b90fb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 05:17:49.239336 kubelet[2671]: I0909 05:17:49.239282 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb" path="/var/lib/kubelet/pods/2fd851e2-54c8-4c6d-8b3e-0ccbef1b90fb/volumes" Sep 9 05:17:49.239820 kubelet[2671]: I0909 05:17:49.239788 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a631756f-f623-4093-a7a3-dddf2be286f3" path="/var/lib/kubelet/pods/a631756f-f623-4093-a7a3-dddf2be286f3/volumes" Sep 9 05:17:49.427454 containerd[1527]: time="2025-09-09T05:17:49.427392827Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1757395068 nanos:75215942}" Sep 9 05:17:49.961856 sshd[4245]: Connection closed by 10.0.0.1 port 33080 Sep 9 05:17:49.962124 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:49.969757 systemd[1]: sshd@20-10.0.0.147:22-10.0.0.1:33080.service: Deactivated successfully. Sep 9 05:17:49.972185 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 05:17:49.972437 systemd[1]: session-21.scope: Consumed 1.177s CPU time, 24.4M memory peak. Sep 9 05:17:49.973587 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit. Sep 9 05:17:49.975154 systemd[1]: Started sshd@21-10.0.0.147:22-10.0.0.1:60154.service - OpenSSH per-connection server daemon (10.0.0.1:60154). Sep 9 05:17:49.976216 systemd-logind[1509]: Removed session 21. 
Sep 9 05:17:50.027874 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 60154 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:50.029079 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:50.033119 systemd-logind[1509]: New session 22 of user core. Sep 9 05:17:50.044953 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 05:17:50.923483 sshd[4409]: Connection closed by 10.0.0.1 port 60154 Sep 9 05:17:50.924117 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:50.933215 systemd[1]: sshd@21-10.0.0.147:22-10.0.0.1:60154.service: Deactivated successfully. Sep 9 05:17:50.940430 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 05:17:50.943090 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit. Sep 9 05:17:50.947233 systemd[1]: Started sshd@22-10.0.0.147:22-10.0.0.1:60162.service - OpenSSH per-connection server daemon (10.0.0.1:60162). Sep 9 05:17:50.950396 systemd-logind[1509]: Removed session 22. Sep 9 05:17:50.963992 systemd[1]: Created slice kubepods-burstable-pod092b05a8_37b7_4699_b944_4633950c6b20.slice - libcontainer container kubepods-burstable-pod092b05a8_37b7_4699_b944_4633950c6b20.slice. Sep 9 05:17:51.006005 sshd[4421]: Accepted publickey for core from 10.0.0.1 port 60162 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:51.007048 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:51.010648 systemd-logind[1509]: New session 23 of user core. Sep 9 05:17:51.020023 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 05:17:51.068840 sshd[4424]: Connection closed by 10.0.0.1 port 60162 Sep 9 05:17:51.069114 sshd-session[4421]: pam_unix(sshd:session): session closed for user core Sep 9 05:17:51.070411 kubelet[2671]: I0909 05:17:51.070326 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/092b05a8-37b7-4699-b944-4633950c6b20-hostproc\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.070411 kubelet[2671]: I0909 05:17:51.070363 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/092b05a8-37b7-4699-b944-4633950c6b20-cilium-ipsec-secrets\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.070411 kubelet[2671]: I0909 05:17:51.070383 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/092b05a8-37b7-4699-b944-4633950c6b20-cilium-run\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.070898 kubelet[2671]: I0909 05:17:51.070397 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/092b05a8-37b7-4699-b944-4633950c6b20-etc-cni-netd\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.070898 kubelet[2671]: I0909 05:17:51.070768 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/092b05a8-37b7-4699-b944-4633950c6b20-host-proc-sys-net\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.070898 kubelet[2671]: I0909 05:17:51.070785 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/092b05a8-37b7-4699-b944-4633950c6b20-hubble-tls\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.074455 kubelet[2671]: I0909 05:17:51.070800 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/092b05a8-37b7-4699-b944-4633950c6b20-cni-path\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.074583 kubelet[2671]: I0909 05:17:51.074565 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/092b05a8-37b7-4699-b944-4633950c6b20-bpf-maps\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.074660 kubelet[2671]: I0909 05:17:51.074648 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/092b05a8-37b7-4699-b944-4633950c6b20-host-proc-sys-kernel\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.074733 kubelet[2671]: I0909 05:17:51.074718 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/092b05a8-37b7-4699-b944-4633950c6b20-cilium-cgroup\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.074805 kubelet[2671]: I0909 05:17:51.074794 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/092b05a8-37b7-4699-b944-4633950c6b20-cilium-config-path\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.074893 kubelet[2671]: I0909 05:17:51.074880 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/092b05a8-37b7-4699-b944-4633950c6b20-clustermesh-secrets\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.075007 kubelet[2671]: I0909 05:17:51.074974 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/092b05a8-37b7-4699-b944-4633950c6b20-xtables-lock\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.075037 kubelet[2671]: I0909 05:17:51.075008 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/092b05a8-37b7-4699-b944-4633950c6b20-lib-modules\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 
05:17:51.075037 kubelet[2671]: I0909 05:17:51.075025 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg62x\" (UniqueName: \"kubernetes.io/projected/092b05a8-37b7-4699-b944-4633950c6b20-kube-api-access-wg62x\") pod \"cilium-7km5n\" (UID: \"092b05a8-37b7-4699-b944-4633950c6b20\") " pod="kube-system/cilium-7km5n" Sep 9 05:17:51.084795 systemd[1]: sshd@22-10.0.0.147:22-10.0.0.1:60162.service: Deactivated successfully. Sep 9 05:17:51.086425 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 05:17:51.087128 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit. Sep 9 05:17:51.089366 systemd[1]: Started sshd@23-10.0.0.147:22-10.0.0.1:60178.service - OpenSSH per-connection server daemon (10.0.0.1:60178). Sep 9 05:17:51.089954 systemd-logind[1509]: Removed session 23. Sep 9 05:17:51.140017 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 60178 ssh2: RSA SHA256:BZm90Ok3j8HCXtlwShuWuMQDPsEE0kFrFWmP82ap/wE Sep 9 05:17:51.141146 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:17:51.144961 systemd-logind[1509]: New session 24 of user core. Sep 9 05:17:51.156059 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 05:17:51.268359 containerd[1527]: time="2025-09-09T05:17:51.268258129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7km5n,Uid:092b05a8-37b7-4699-b944-4633950c6b20,Namespace:kube-system,Attempt:0,}" Sep 9 05:17:51.280959 containerd[1527]: time="2025-09-09T05:17:51.280916776Z" level=info msg="connecting to shim 189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2" address="unix:///run/containerd/s/0d10301571c5ce7ab83ef19bd6e67f174a2938f6618b7e554346892536fad006" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:17:51.311004 systemd[1]: Started cri-containerd-189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2.scope - libcontainer container 189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2. 
Sep 9 05:17:51.330847 containerd[1527]: time="2025-09-09T05:17:51.330749069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7km5n,Uid:092b05a8-37b7-4699-b944-4633950c6b20,Namespace:kube-system,Attempt:0,} returns sandbox id \"189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2\"" Sep 9 05:17:51.338838 containerd[1527]: time="2025-09-09T05:17:51.337140111Z" level=info msg="CreateContainer within sandbox \"189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:17:51.345577 containerd[1527]: time="2025-09-09T05:17:51.345547370Z" level=info msg="Container 3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:17:51.350190 containerd[1527]: time="2025-09-09T05:17:51.350158227Z" level=info msg="CreateContainer within sandbox \"189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd\"" Sep 9 05:17:51.350578 containerd[1527]: time="2025-09-09T05:17:51.350558414Z" level=info msg="StartContainer for \"3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd\"" Sep 9 05:17:51.351280 containerd[1527]: time="2025-09-09T05:17:51.351254113Z" level=info msg="connecting to shim 3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd" address="unix:///run/containerd/s/0d10301571c5ce7ab83ef19bd6e67f174a2938f6618b7e554346892536fad006" protocol=ttrpc version=3 Sep 9 05:17:51.370040 systemd[1]: Started cri-containerd-3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd.scope - libcontainer container 3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd. Sep 9 05:17:51.392981 containerd[1527]: time="2025-09-09T05:17:51.392949419Z" level=info msg="StartContainer for \"3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd\" returns successfully" Sep 9 05:17:51.400498 systemd[1]: cri-containerd-3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd.scope: Deactivated successfully. 
Sep 9 05:17:51.403022 containerd[1527]: time="2025-09-09T05:17:51.402992267Z" level=info msg="received exit event container_id:\"3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd\" id:\"3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd\" pid:4504 exited_at:{seconds:1757395071 nanos:402767994}" Sep 9 05:17:51.403123 containerd[1527]: time="2025-09-09T05:17:51.403091544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd\" id:\"3dbca263d5ab819070537544377d46079c855379b06c150cb7f692fd11c450cd\" pid:4504 exited_at:{seconds:1757395071 nanos:402767994}" Sep 9 05:17:51.448866 containerd[1527]: time="2025-09-09T05:17:51.448787005Z" level=info msg="CreateContainer within sandbox \"189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:17:51.454507 containerd[1527]: time="2025-09-09T05:17:51.454476629Z" level=info msg="Container 5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:17:51.461802 containerd[1527]: time="2025-09-09T05:17:51.461768402Z" level=info msg="CreateContainer within sandbox \"189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642\"" Sep 9 05:17:51.462479 containerd[1527]: time="2025-09-09T05:17:51.462458061Z" level=info msg="StartContainer for \"5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642\"" Sep 9 05:17:51.463429 containerd[1527]: time="2025-09-09T05:17:51.463369993Z" level=info msg="connecting to shim 5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642" address="unix:///run/containerd/s/0d10301571c5ce7ab83ef19bd6e67f174a2938f6618b7e554346892536fad006" protocol=ttrpc version=3 Sep 9 05:17:51.482962 systemd[1]: Started cri-containerd-5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642.scope - libcontainer container 5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642. Sep 9 05:17:51.504796 containerd[1527]: time="2025-09-09T05:17:51.504761748Z" level=info msg="StartContainer for \"5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642\" returns successfully" Sep 9 05:17:51.511938 systemd[1]: cri-containerd-5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642.scope: Deactivated successfully. Sep 9 05:17:51.513251 containerd[1527]: time="2025-09-09T05:17:51.513216565Z" level=info msg="received exit event container_id:\"5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642\" id:\"5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642\" pid:4549 exited_at:{seconds:1757395071 nanos:513058690}" Sep 9 05:17:51.513750 containerd[1527]: time="2025-09-09T05:17:51.513723390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642\" id:\"5cbef320cfb8293d1a72a3cd8a28c8c7809c98e97047c93e162764501a344642\" pid:4549 exited_at:{seconds:1757395071 nanos:513058690}" Sep 9 05:17:52.180207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674700389.mount: Deactivated successfully. 
Sep 9 05:17:52.450267 containerd[1527]: time="2025-09-09T05:17:52.450184947Z" level=info msg="CreateContainer within sandbox \"189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:17:52.460039 containerd[1527]: time="2025-09-09T05:17:52.458918170Z" level=info msg="Container 6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:17:52.466744 containerd[1527]: time="2025-09-09T05:17:52.466685542Z" level=info msg="CreateContainer within sandbox \"189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76\"" Sep 9 05:17:52.467152 containerd[1527]: time="2025-09-09T05:17:52.467127849Z" level=info msg="StartContainer for \"6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76\"" Sep 9 05:17:52.469545 containerd[1527]: time="2025-09-09T05:17:52.469518179Z" level=info msg="connecting to shim 6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76" address="unix:///run/containerd/s/0d10301571c5ce7ab83ef19bd6e67f174a2938f6618b7e554346892536fad006" protocol=ttrpc version=3 Sep 9 05:17:52.492962 systemd[1]: Started cri-containerd-6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76.scope - libcontainer container 6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76. Sep 9 05:17:52.527595 containerd[1527]: time="2025-09-09T05:17:52.527551994Z" level=info msg="StartContainer for \"6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76\" returns successfully" Sep 9 05:17:52.530268 systemd[1]: cri-containerd-6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76.scope: Deactivated successfully. Sep 9 05:17:52.532061 containerd[1527]: time="2025-09-09T05:17:52.531938185Z" level=info msg="received exit event container_id:\"6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76\" id:\"6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76\" pid:4594 exited_at:{seconds:1757395072 nanos:531684113}" Sep 9 05:17:52.533000 containerd[1527]: time="2025-09-09T05:17:52.532950195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76\" id:\"6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76\" pid:4594 exited_at:{seconds:1757395072 nanos:531684113}" Sep 9 05:17:53.180614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c116a2940327bcc89b9aedaff0134ed4f8218a2ff88a72bfa33bd104fbe1e76-rootfs.mount: Deactivated successfully. 
Sep 9 05:17:53.293237 kubelet[2671]: E0909 05:17:53.293188 2671 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:17:53.458811 containerd[1527]: time="2025-09-09T05:17:53.458677094Z" level=info msg="CreateContainer within sandbox \"189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 05:17:53.470984 containerd[1527]: time="2025-09-09T05:17:53.470938154Z" level=info msg="Container b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:17:53.477355 containerd[1527]: time="2025-09-09T05:17:53.477322576Z" level=info msg="CreateContainer within sandbox \"189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5\"" Sep 9 05:17:53.477918 containerd[1527]: time="2025-09-09T05:17:53.477891321Z" level=info msg="StartContainer for \"b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5\"" Sep 9 05:17:53.478597 containerd[1527]: time="2025-09-09T05:17:53.478575622Z" level=info msg="connecting to shim b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5" address="unix:///run/containerd/s/0d10301571c5ce7ab83ef19bd6e67f174a2938f6618b7e554346892536fad006" protocol=ttrpc version=3 Sep 9 05:17:53.500975 systemd[1]: Started cri-containerd-b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5.scope - libcontainer container b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5. Sep 9 05:17:53.523789 systemd[1]: cri-containerd-b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5.scope: Deactivated successfully. Sep 9 05:17:53.526919 containerd[1527]: time="2025-09-09T05:17:53.526809402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5\" id:\"b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5\" pid:4632 exited_at:{seconds:1757395073 nanos:524187715}" Sep 9 05:17:53.527442 containerd[1527]: time="2025-09-09T05:17:53.527413865Z" level=info msg="received exit event container_id:\"b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5\" id:\"b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5\" pid:4632 exited_at:{seconds:1757395073 nanos:524187715}" Sep 9 05:17:53.531257 containerd[1527]: time="2025-09-09T05:17:53.531229199Z" level=info msg="StartContainer for \"b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5\" returns successfully" Sep 9 05:17:53.557549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6699f5b5260eed2afa9f78de96101c497362e27edb519504385fedf336729b5-rootfs.mount: Deactivated successfully. 
Sep 9 05:17:54.460119 containerd[1527]: time="2025-09-09T05:17:54.460081401Z" level=info msg="CreateContainer within sandbox \"189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 05:17:54.467368 containerd[1527]: time="2025-09-09T05:17:54.467339010Z" level=info msg="Container dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:17:54.479941 containerd[1527]: time="2025-09-09T05:17:54.479898041Z" level=info msg="CreateContainer within sandbox \"189367f192365624910a9170331c4f13d2dca4d84bc588b79135bb9cd194a4f2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a\"" Sep 9 05:17:54.480688 containerd[1527]: time="2025-09-09T05:17:54.480664701Z" level=info msg="StartContainer for \"dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a\"" Sep 9 05:17:54.481703 containerd[1527]: time="2025-09-09T05:17:54.481681754Z" level=info msg="connecting to shim dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a" address="unix:///run/containerd/s/0d10301571c5ce7ab83ef19bd6e67f174a2938f6618b7e554346892536fad006" protocol=ttrpc version=3 Sep 9 05:17:54.504958 systemd[1]: Started cri-containerd-dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a.scope - libcontainer container dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a. Sep 9 05:17:54.536904 containerd[1527]: time="2025-09-09T05:17:54.536861108Z" level=info msg="StartContainer for \"dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a\" returns successfully" Sep 9 05:17:54.584994 containerd[1527]: time="2025-09-09T05:17:54.584947608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a\" id:\"241376bbaa51c76ab6dca830da03cc7711ff205d33ee9ed3a95e46b0d3750836\" pid:4700 exited_at:{seconds:1757395074 nanos:584630376}" Sep 9 05:17:54.754915 kubelet[2671]: I0909 05:17:54.754215 2671 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T05:17:54Z","lastTransitionTime":"2025-09-09T05:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 05:17:54.797844 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 9 05:17:57.490667 containerd[1527]: time="2025-09-09T05:17:57.490625042Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a\" id:\"5715209fc685e59178b6a5cff56b95ecb84e2609860ef17ad0db52bc08e52e01\" pid:5156 exit_status:1 exited_at:{seconds:1757395077 nanos:490262010}" Sep 9 05:17:57.549194 systemd-networkd[1427]: lxc_health: Link UP Sep 9 05:17:57.550050 systemd-networkd[1427]: lxc_health: Gained carrier Sep 9 05:17:59.000012 systemd-networkd[1427]: lxc_health: Gained IPv6LL Sep 9 05:17:59.286534 kubelet[2671]: I0909 05:17:59.285883 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7km5n" podStartSLOduration=9.285865253 podStartE2EDuration="9.285865253s" podCreationTimestamp="2025-09-09 05:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-09-09 05:17:55.47885074 +0000 UTC m=+72.320878446" watchObservedRunningTime="2025-09-09 05:17:59.285865253 +0000 UTC m=+76.127892959" Sep 9 05:17:59.616333 containerd[1527]: time="2025-09-09T05:17:59.616194858Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a\" id:\"2318c36419c3b6d9b4586bf1183adc223debbb4651068386f6f03ab466342844\" pid:5239 exited_at:{seconds:1757395079 nanos:615939303}" Sep 9 05:18:01.725776 containerd[1527]: time="2025-09-09T05:18:01.725719619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a\" id:\"07bb99b4d05f6c5bc767df5e1c8d35ae859fbb30de97b9cb620baf40e33cf287\" pid:5273 exited_at:{seconds:1757395081 nanos:725201667}" Sep 9 05:18:01.728464 kubelet[2671]: E0909 05:18:01.728391 2671 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41430->127.0.0.1:46857: write tcp 127.0.0.1:41430->127.0.0.1:46857: write: connection reset by peer Sep 9 05:18:03.835348 containerd[1527]: time="2025-09-09T05:18:03.835311079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcc9cb0a4fec22d66c7ad3118888bc2cc0c5cc4a08f956fa69e9932874c3195a\" id:\"e6b93025216130fb78d3951451c06828d061c0cc7dfdd79d6e2eb5b3efd4065c\" pid:5297 exited_at:{seconds:1757395083 nanos:835030643}" Sep 9 05:18:03.839113 sshd[4434]: Connection closed by 10.0.0.1 port 60178 Sep 9 05:18:03.839730 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Sep 9 05:18:03.843136 systemd[1]: sshd@23-10.0.0.147:22-10.0.0.1:60178.service: Deactivated successfully. Sep 9 05:18:03.844675 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 05:18:03.845483 systemd-logind[1509]: Session 24 logged out. Waiting for processes to exit. Sep 9 05:18:03.846590 systemd-logind[1509]: Removed session 24.