Sep 9 05:11:09.757168 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 05:11:09.757189 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 03:38:34 -00 2025
Sep 9 05:11:09.757199 kernel: KASLR enabled
Sep 9 05:11:09.757205 kernel: efi: EFI v2.7 by EDK II
Sep 9 05:11:09.757211 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 9 05:11:09.757216 kernel: random: crng init done
Sep 9 05:11:09.757223 kernel: secureboot: Secure boot disabled
Sep 9 05:11:09.757228 kernel: ACPI: Early table checksum verification disabled
Sep 9 05:11:09.757234 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 9 05:11:09.757241 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 05:11:09.757247 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:11:09.757252 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:11:09.757258 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:11:09.757264 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:11:09.757271 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:11:09.757278 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:11:09.757284 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:11:09.757290 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:11:09.757296 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:11:09.757302 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 05:11:09.757308 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 05:11:09.757314 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 05:11:09.757320 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 9 05:11:09.757326 kernel: Zone ranges:
Sep 9 05:11:09.757332 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 05:11:09.757339 kernel: DMA32 empty
Sep 9 05:11:09.757345 kernel: Normal empty
Sep 9 05:11:09.757351 kernel: Device empty
Sep 9 05:11:09.757357 kernel: Movable zone start for each node
Sep 9 05:11:09.757362 kernel: Early memory node ranges
Sep 9 05:11:09.757368 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 9 05:11:09.757374 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 9 05:11:09.757380 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 9 05:11:09.757386 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 9 05:11:09.757392 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 9 05:11:09.757398 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 9 05:11:09.757404 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 9 05:11:09.757411 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 9 05:11:09.757417 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 9 05:11:09.757423 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 05:11:09.757432 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 05:11:09.757438 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 05:11:09.757444 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 05:11:09.757452 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 05:11:09.757459 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 05:11:09.757465 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 9 05:11:09.757471 kernel: psci: probing for conduit method from ACPI.
Sep 9 05:11:09.757478 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 05:11:09.757485 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 05:11:09.757491 kernel: psci: Trusted OS migration not required
Sep 9 05:11:09.757497 kernel: psci: SMC Calling Convention v1.1
Sep 9 05:11:09.757504 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 05:11:09.757510 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 05:11:09.757518 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 05:11:09.757524 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 05:11:09.757530 kernel: Detected PIPT I-cache on CPU0
Sep 9 05:11:09.757537 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 05:11:09.757544 kernel: CPU features: detected: Spectre-v4
Sep 9 05:11:09.757550 kernel: CPU features: detected: Spectre-BHB
Sep 9 05:11:09.757556 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 05:11:09.757563 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 05:11:09.757569 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 05:11:09.757576 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 05:11:09.757582 kernel: alternatives: applying boot alternatives
Sep 9 05:11:09.757590 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90
Sep 9 05:11:09.757598 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 05:11:09.757605 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 05:11:09.757611 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 05:11:09.757618 kernel: Fallback order for Node 0: 0
Sep 9 05:11:09.757624 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 9 05:11:09.757631 kernel: Policy zone: DMA
Sep 9 05:11:09.757637 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 05:11:09.757644 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 9 05:11:09.757650 kernel: software IO TLB: area num 4.
Sep 9 05:11:09.757656 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 9 05:11:09.757670 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 9 05:11:09.757680 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 05:11:09.757686 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 05:11:09.757693 kernel: rcu: RCU event tracing is enabled.
Sep 9 05:11:09.757709 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 05:11:09.757716 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 05:11:09.757722 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 05:11:09.757729 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 05:11:09.757735 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 05:11:09.757742 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:11:09.757749 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:11:09.757755 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 05:11:09.757763 kernel: GICv3: 256 SPIs implemented
Sep 9 05:11:09.757770 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 05:11:09.757776 kernel: Root IRQ handler: gic_handle_irq
Sep 9 05:11:09.757783 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 05:11:09.757789 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 05:11:09.757796 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 05:11:09.757802 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 05:11:09.757809 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 05:11:09.757816 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 9 05:11:09.757822 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 9 05:11:09.757829 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 9 05:11:09.757835 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 05:11:09.757843 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 05:11:09.757849 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 05:11:09.757870 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 05:11:09.757877 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 05:11:09.757883 kernel: arm-pv: using stolen time PV
Sep 9 05:11:09.757890 kernel: Console: colour dummy device 80x25
Sep 9 05:11:09.757897 kernel: ACPI: Core revision 20240827
Sep 9 05:11:09.757904 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 05:11:09.757910 kernel: pid_max: default: 32768 minimum: 301
Sep 9 05:11:09.757917 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 05:11:09.757925 kernel: landlock: Up and running.
Sep 9 05:11:09.757931 kernel: SELinux: Initializing.
Sep 9 05:11:09.757938 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:11:09.757944 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:11:09.757953 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 05:11:09.757960 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 05:11:09.757967 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 05:11:09.757974 kernel: Remapping and enabling EFI services.
Sep 9 05:11:09.757980 kernel: smp: Bringing up secondary CPUs ...
Sep 9 05:11:09.757993 kernel: Detected PIPT I-cache on CPU1
Sep 9 05:11:09.758000 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 05:11:09.758007 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 9 05:11:09.758015 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 05:11:09.758022 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 05:11:09.758029 kernel: Detected PIPT I-cache on CPU2
Sep 9 05:11:09.758036 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 05:11:09.758044 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 9 05:11:09.758052 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 05:11:09.758058 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 05:11:09.758065 kernel: Detected PIPT I-cache on CPU3
Sep 9 05:11:09.758072 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 05:11:09.758079 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 9 05:11:09.758085 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 05:11:09.758092 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 05:11:09.758099 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 05:11:09.758106 kernel: SMP: Total of 4 processors activated.
Sep 9 05:11:09.758113 kernel: CPU: All CPU(s) started at EL1
Sep 9 05:11:09.758121 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 05:11:09.758127 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 05:11:09.758134 kernel: CPU features: detected: Common not Private translations
Sep 9 05:11:09.758141 kernel: CPU features: detected: CRC32 instructions
Sep 9 05:11:09.758148 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 05:11:09.758155 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 05:11:09.758162 kernel: CPU features: detected: LSE atomic instructions
Sep 9 05:11:09.758168 kernel: CPU features: detected: Privileged Access Never
Sep 9 05:11:09.758176 kernel: CPU features: detected: RAS Extension Support
Sep 9 05:11:09.758183 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 05:11:09.758190 kernel: alternatives: applying system-wide alternatives
Sep 9 05:11:09.758197 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 9 05:11:09.758205 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 9 05:11:09.758212 kernel: devtmpfs: initialized
Sep 9 05:11:09.758219 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 05:11:09.758225 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 05:11:09.758232 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 05:11:09.758240 kernel: 0 pages in range for non-PLT usage
Sep 9 05:11:09.758247 kernel: 508560 pages in range for PLT usage
Sep 9 05:11:09.758254 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 05:11:09.758260 kernel: SMBIOS 3.0.0 present.
Sep 9 05:11:09.758267 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 05:11:09.758274 kernel: DMI: Memory slots populated: 1/1
Sep 9 05:11:09.758281 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 05:11:09.758288 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 05:11:09.758295 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 05:11:09.758303 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 05:11:09.758310 kernel: audit: initializing netlink subsys (disabled)
Sep 9 05:11:09.758317 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 9 05:11:09.758323 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 05:11:09.758330 kernel: cpuidle: using governor menu
Sep 9 05:11:09.758337 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 05:11:09.758344 kernel: ASID allocator initialised with 32768 entries
Sep 9 05:11:09.758350 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 05:11:09.758357 kernel: Serial: AMBA PL011 UART driver
Sep 9 05:11:09.758365 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 05:11:09.758372 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 05:11:09.758379 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 05:11:09.758386 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 05:11:09.758392 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 05:11:09.758399 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 05:11:09.758406 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 05:11:09.758413 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 05:11:09.758420 kernel: ACPI: Added _OSI(Module Device)
Sep 9 05:11:09.758428 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 05:11:09.758434 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 05:11:09.758441 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 05:11:09.758448 kernel: ACPI: Interpreter enabled
Sep 9 05:11:09.758455 kernel: ACPI: Using GIC for interrupt routing
Sep 9 05:11:09.758461 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 05:11:09.758468 kernel: ACPI: CPU0 has been hot-added
Sep 9 05:11:09.758475 kernel: ACPI: CPU1 has been hot-added
Sep 9 05:11:09.758481 kernel: ACPI: CPU2 has been hot-added
Sep 9 05:11:09.758488 kernel: ACPI: CPU3 has been hot-added
Sep 9 05:11:09.758496 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 05:11:09.758503 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 05:11:09.758510 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 05:11:09.758639 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 05:11:09.759194 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 05:11:09.759272 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 05:11:09.759331 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 05:11:09.759393 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 05:11:09.759402 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 05:11:09.759410 kernel: PCI host bridge to bus 0000:00
Sep 9 05:11:09.759476 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 05:11:09.759531 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 05:11:09.759583 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 05:11:09.759634 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 05:11:09.759754 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 9 05:11:09.759837 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 05:11:09.759900 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 9 05:11:09.759960 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 9 05:11:09.760018 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 05:11:09.760077 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 9 05:11:09.760136 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 9 05:11:09.760197 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 9 05:11:09.760251 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 05:11:09.760303 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 05:11:09.760355 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 05:11:09.760364 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 05:11:09.760372 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 05:11:09.760379 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 05:11:09.760388 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 05:11:09.760394 kernel: iommu: Default domain type: Translated
Sep 9 05:11:09.760402 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 05:11:09.760408 kernel: efivars: Registered efivars operations
Sep 9 05:11:09.760415 kernel: vgaarb: loaded
Sep 9 05:11:09.760422 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 05:11:09.760429 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 05:11:09.760436 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 05:11:09.760443 kernel: pnp: PnP ACPI init
Sep 9 05:11:09.760507 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 05:11:09.760517 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 05:11:09.760524 kernel: NET: Registered PF_INET protocol family
Sep 9 05:11:09.760531 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 05:11:09.760538 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 05:11:09.760546 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 05:11:09.760553 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 05:11:09.760560 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 05:11:09.760568 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 05:11:09.760575 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:11:09.760582 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:11:09.760589 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 05:11:09.760595 kernel: PCI: CLS 0 bytes, default 64
Sep 9 05:11:09.760602 kernel: kvm [1]: HYP mode not available
Sep 9 05:11:09.760609 kernel: Initialise system trusted keyrings
Sep 9 05:11:09.760616 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 05:11:09.760622 kernel: Key type asymmetric registered
Sep 9 05:11:09.760630 kernel: Asymmetric key parser 'x509' registered
Sep 9 05:11:09.760637 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 05:11:09.760644 kernel: io scheduler mq-deadline registered
Sep 9 05:11:09.760651 kernel: io scheduler kyber registered
Sep 9 05:11:09.760658 kernel: io scheduler bfq registered
Sep 9 05:11:09.760672 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 05:11:09.760679 kernel: ACPI: button: Power Button [PWRB]
Sep 9 05:11:09.760686 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 05:11:09.760810 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 05:11:09.760824 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 05:11:09.760831 kernel: thunder_xcv, ver 1.0
Sep 9 05:11:09.760838 kernel: thunder_bgx, ver 1.0
Sep 9 05:11:09.760845 kernel: nicpf, ver 1.0
Sep 9 05:11:09.760852 kernel: nicvf, ver 1.0
Sep 9 05:11:09.760921 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 05:11:09.760978 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T05:11:09 UTC (1757394669)
Sep 9 05:11:09.760988 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 05:11:09.760997 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 05:11:09.761004 kernel: watchdog: NMI not fully supported
Sep 9 05:11:09.761011 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 05:11:09.761018 kernel: NET: Registered PF_INET6 protocol family
Sep 9 05:11:09.761025 kernel: Segment Routing with IPv6
Sep 9 05:11:09.761032 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 05:11:09.761039 kernel: NET: Registered PF_PACKET protocol family
Sep 9 05:11:09.761046 kernel: Key type dns_resolver registered
Sep 9 05:11:09.761053 kernel: registered taskstats version 1
Sep 9 05:11:09.761060 kernel: Loading compiled-in X.509 certificates
Sep 9 05:11:09.761068 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 44d1e8b5c5ffbaa3cedd99c03d41580671fabec5'
Sep 9 05:11:09.761076 kernel: Demotion targets for Node 0: null
Sep 9 05:11:09.761083 kernel: Key type .fscrypt registered
Sep 9 05:11:09.761089 kernel: Key type fscrypt-provisioning registered
Sep 9 05:11:09.761096 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 05:11:09.761103 kernel: ima: Allocated hash algorithm: sha1
Sep 9 05:11:09.761110 kernel: ima: No architecture policies found
Sep 9 05:11:09.761117 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 05:11:09.761125 kernel: clk: Disabling unused clocks
Sep 9 05:11:09.761132 kernel: PM: genpd: Disabling unused power domains
Sep 9 05:11:09.761139 kernel: Warning: unable to open an initial console.
Sep 9 05:11:09.761146 kernel: Freeing unused kernel memory: 38976K
Sep 9 05:11:09.761153 kernel: Run /init as init process
Sep 9 05:11:09.761159 kernel: with arguments:
Sep 9 05:11:09.761166 kernel: /init
Sep 9 05:11:09.761173 kernel: with environment:
Sep 9 05:11:09.761179 kernel: HOME=/
Sep 9 05:11:09.761188 kernel: TERM=linux
Sep 9 05:11:09.761194 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 05:11:09.761202 systemd[1]: Successfully made /usr/ read-only.
Sep 9 05:11:09.761212 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:11:09.761220 systemd[1]: Detected virtualization kvm.
Sep 9 05:11:09.761227 systemd[1]: Detected architecture arm64.
Sep 9 05:11:09.761234 systemd[1]: Running in initrd.
Sep 9 05:11:09.761242 systemd[1]: No hostname configured, using default hostname.
Sep 9 05:11:09.761251 systemd[1]: Hostname set to .
Sep 9 05:11:09.761258 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:11:09.761265 systemd[1]: Queued start job for default target initrd.target.
Sep 9 05:11:09.761273 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:11:09.761280 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:11:09.761288 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 05:11:09.761296 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:11:09.761304 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 05:11:09.761313 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 05:11:09.761322 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 05:11:09.761329 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 05:11:09.761337 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:11:09.761344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:11:09.761352 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:11:09.761359 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:11:09.761368 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:11:09.761375 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:11:09.761382 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:11:09.761390 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:11:09.761397 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 05:11:09.761405 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 05:11:09.761412 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:11:09.761420 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:11:09.761428 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:11:09.761436 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:11:09.761443 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 05:11:09.761450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:11:09.761458 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 05:11:09.761466 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 05:11:09.761473 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 05:11:09.761480 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:11:09.761488 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:11:09.761496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:11:09.761504 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 05:11:09.761511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:11:09.761519 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 05:11:09.761541 systemd-journald[244]: Collecting audit messages is disabled.
Sep 9 05:11:09.761559 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 05:11:09.761567 systemd-journald[244]: Journal started
Sep 9 05:11:09.761586 systemd-journald[244]: Runtime Journal (/run/log/journal/7dfa0af10a074ae9b4bd1294ccec0902) is 6M, max 48.5M, 42.4M free.
Sep 9 05:11:09.771815 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 05:11:09.771863 kernel: Bridge firewalling registered
Sep 9 05:11:09.771882 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:11:09.756158 systemd-modules-load[245]: Inserted module 'overlay'
Sep 9 05:11:09.770350 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 9 05:11:09.777849 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:11:09.778268 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:11:09.779500 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:11:09.783550 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 05:11:09.785300 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:11:09.787555 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:11:09.793895 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:11:09.799150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:11:09.802435 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:11:09.802916 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 05:11:09.804675 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:11:09.806752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:11:09.809356 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 05:11:09.811685 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:11:09.831610 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90
Sep 9 05:11:09.845350 systemd-resolved[288]: Positive Trust Anchors:
Sep 9 05:11:09.845369 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:11:09.845400 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:11:09.850036 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 9 05:11:09.850943 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:11:09.854865 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:11:09.899729 kernel: SCSI subsystem initialized
Sep 9 05:11:09.904723 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 05:11:09.911724 kernel: iscsi: registered transport (tcp)
Sep 9 05:11:09.923727 kernel: iscsi: registered transport (qla4xxx)
Sep 9 05:11:09.924739 kernel: QLogic iSCSI HBA Driver
Sep 9 05:11:09.939875 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:11:09.960740 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:11:09.962253 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:11:10.007374 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:11:10.009591 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 05:11:10.069719 kernel: raid6: neonx8 gen() 15745 MB/s
Sep 9 05:11:10.086714 kernel: raid6: neonx4 gen() 15795 MB/s
Sep 9 05:11:10.103721 kernel: raid6: neonx2 gen() 13199 MB/s
Sep 9 05:11:10.120719 kernel: raid6: neonx1 gen() 10425 MB/s
Sep 9 05:11:10.137723 kernel: raid6: int64x8 gen() 6878 MB/s
Sep 9 05:11:10.154730 kernel: raid6: int64x4 gen() 7353 MB/s
Sep 9 05:11:10.171728 kernel: raid6: int64x2 gen() 6104 MB/s
Sep 9 05:11:10.188726 kernel: raid6: int64x1 gen() 5058 MB/s
Sep 9 05:11:10.188753 kernel: raid6: using algorithm neonx4 gen() 15795 MB/s
Sep 9 05:11:10.205732 kernel: raid6: .... xor() 12340 MB/s, rmw enabled
Sep 9 05:11:10.205762 kernel: raid6: using neon recovery algorithm
Sep 9 05:11:10.210720 kernel: xor: measuring software checksum speed
Sep 9 05:11:10.210736 kernel: 8regs : 21607 MB/sec
Sep 9 05:11:10.211748 kernel: 32regs : 19358 MB/sec
Sep 9 05:11:10.211775 kernel: arm64_neon : 28099 MB/sec
Sep 9 05:11:10.211793 kernel: xor: using function: arm64_neon (28099 MB/sec)
Sep 9 05:11:10.263737 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 05:11:10.269572 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:11:10.272181 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:11:10.304140 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Sep 9 05:11:10.308159 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:11:10.310598 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 05:11:10.335798 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Sep 9 05:11:10.357957 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:11:10.360386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:11:10.414945 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:11:10.417818 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 05:11:10.462495 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 05:11:10.462626 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 05:11:10.468932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:11:10.472801 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 05:11:10.472819 kernel: GPT:9289727 != 19775487
Sep 9 05:11:10.472828 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 05:11:10.472838 kernel: GPT:9289727 != 19775487
Sep 9 05:11:10.472846 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 05:11:10.472861 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:11:10.469040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:11:10.474864 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:11:10.476668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:11:10.502227 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 05:11:10.503682 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 05:11:10.506193 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:11:10.515383 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 05:11:10.517687 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:11:10.530713 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 05:11:10.538814 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 05:11:10.544037 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:11:10.545063 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:11:10.547125 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:11:10.549762 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 05:11:10.551308 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 05:11:10.572239 disk-uuid[591]: Primary Header is updated.
Sep 9 05:11:10.572239 disk-uuid[591]: Secondary Entries is updated.
Sep 9 05:11:10.572239 disk-uuid[591]: Secondary Header is updated.
Sep 9 05:11:10.574872 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:11:10.577889 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:11:11.581754 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:11:11.582626 disk-uuid[595]: The operation has completed successfully.
Sep 9 05:11:11.610775 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 05:11:11.610869 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 05:11:11.630296 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 05:11:11.643478 sh[612]: Success
Sep 9 05:11:11.656061 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 05:11:11.656099 kernel: device-mapper: uevent: version 1.0.3
Sep 9 05:11:11.656111 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 05:11:11.662740 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 05:11:11.685925 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 05:11:11.688508 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 05:11:11.705802 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 05:11:11.712720 kernel: BTRFS: device fsid 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (625)
Sep 9 05:11:11.714419 kernel: BTRFS info (device dm-0): first mount of filesystem 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364
Sep 9 05:11:11.714437 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 05:11:11.717903 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 05:11:11.717922 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 05:11:11.718889 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 05:11:11.720214 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:11:11.721765 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 05:11:11.722410 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 05:11:11.724202 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 05:11:11.749458 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656)
Sep 9 05:11:11.749502 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 05:11:11.749513 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 05:11:11.752755 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 05:11:11.752787 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 05:11:11.756726 kernel: BTRFS info (device vda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 05:11:11.758730 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 05:11:11.760589 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 05:11:11.823071 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:11:11.826365 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:11:11.862015 systemd-networkd[800]: lo: Link UP
Sep 9 05:11:11.862674 systemd-networkd[800]: lo: Gained carrier
Sep 9 05:11:11.863417 systemd-networkd[800]: Enumeration completed
Sep 9 05:11:11.863573 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:11:11.864122 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:11:11.864125 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 05:11:11.865102 systemd[1]: Reached target network.target - Network.
Sep 9 05:11:11.866512 systemd-networkd[800]: eth0: Link UP
Sep 9 05:11:11.866672 systemd-networkd[800]: eth0: Gained carrier
Sep 9 05:11:11.866681 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:11:11.873810 ignition[703]: Ignition 2.22.0
Sep 9 05:11:11.873815 ignition[703]: Stage: fetch-offline
Sep 9 05:11:11.873845 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:11:11.873853 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:11:11.873928 ignition[703]: parsed url from cmdline: ""
Sep 9 05:11:11.873931 ignition[703]: no config URL provided
Sep 9 05:11:11.873935 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 05:11:11.873940 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Sep 9 05:11:11.873956 ignition[703]: op(1): [started] loading QEMU firmware config module
Sep 9 05:11:11.873960 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 05:11:11.878595 ignition[703]: op(1): [finished] loading QEMU firmware config module
Sep 9 05:11:11.887777 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 05:11:11.922132 ignition[703]: parsing config with SHA512: 75dcba6672afc7d521c03f70b1a2ce6bc846d96c7980e81896ccad8f9cc7b8b0f00cd1ec764f58fa7cc8724b20fca1258e3aaaf901d6d9b7991720b6efc90b36
Sep 9 05:11:11.926320 unknown[703]: fetched base config from "system"
Sep 9 05:11:11.926330 unknown[703]: fetched user config from "qemu"
Sep 9 05:11:11.926728 ignition[703]: fetch-offline: fetch-offline passed
Sep 9 05:11:11.926778 ignition[703]: Ignition finished successfully
Sep 9 05:11:11.928625 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:11:11.930422 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 05:11:11.931146 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 05:11:11.963094 ignition[812]: Ignition 2.22.0
Sep 9 05:11:11.963109 ignition[812]: Stage: kargs
Sep 9 05:11:11.963232 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:11:11.963241 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:11:11.965996 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 05:11:11.963968 ignition[812]: kargs: kargs passed
Sep 9 05:11:11.968571 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 05:11:11.964005 ignition[812]: Ignition finished successfully
Sep 9 05:11:11.997010 ignition[821]: Ignition 2.22.0
Sep 9 05:11:11.997027 ignition[821]: Stage: disks
Sep 9 05:11:11.997144 ignition[821]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:11:12.000268 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 05:11:11.997152 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:11:12.001673 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 05:11:11.997872 ignition[821]: disks: disks passed
Sep 9 05:11:12.003634 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 05:11:11.997916 ignition[821]: Ignition finished successfully
Sep 9 05:11:12.005922 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:11:12.007870 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 05:11:12.009405 systemd[1]: Reached target basic.target - Basic System.
Sep 9 05:11:12.012223 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 05:11:12.047891 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 05:11:12.052563 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 05:11:12.054739 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 05:11:12.111732 kernel: EXT4-fs (vda9): mounted filesystem 88574756-967d-44b3-be66-46689c8baf27 r/w with ordered data mode. Quota mode: none.
Sep 9 05:11:12.112354 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 05:11:12.113611 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:11:12.116677 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 05:11:12.118831 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 05:11:12.119645 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 05:11:12.119691 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 05:11:12.119725 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 05:11:12.138044 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 05:11:12.140354 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 05:11:12.144711 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839)
Sep 9 05:11:12.144732 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 05:11:12.144742 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 05:11:12.146880 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 05:11:12.146915 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 05:11:12.147852 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 05:11:12.174262 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 05:11:12.178099 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory
Sep 9 05:11:12.181336 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 05:11:12.185007 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 05:11:12.246687 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 05:11:12.248898 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 05:11:12.250400 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 05:11:12.263816 kernel: BTRFS info (device vda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 05:11:12.273786 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 05:11:12.284843 ignition[955]: INFO : Ignition 2.22.0
Sep 9 05:11:12.284843 ignition[955]: INFO : Stage: mount
Sep 9 05:11:12.286193 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:11:12.286193 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:11:12.286193 ignition[955]: INFO : mount: mount passed
Sep 9 05:11:12.286193 ignition[955]: INFO : Ignition finished successfully
Sep 9 05:11:12.288347 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 05:11:12.290476 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 05:11:12.842121 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 05:11:12.843563 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 05:11:12.859718 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (967)
Sep 9 05:11:12.861727 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 05:11:12.861745 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 05:11:12.863927 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 05:11:12.863944 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 05:11:12.865169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 05:11:12.901530 ignition[984]: INFO : Ignition 2.22.0
Sep 9 05:11:12.901530 ignition[984]: INFO : Stage: files
Sep 9 05:11:12.902837 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:11:12.902837 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:11:12.902837 ignition[984]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 05:11:12.905587 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 05:11:12.905587 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 05:11:12.905587 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 05:11:12.905587 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 05:11:12.905587 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 05:11:12.905423 unknown[984]: wrote ssh authorized keys file for user: core
Sep 9 05:11:12.911380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 05:11:12.911380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 9 05:11:13.038457 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 05:11:13.161338 systemd-networkd[800]: eth0: Gained IPv6LL
Sep 9 05:11:13.305853 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 05:11:13.307515 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 05:11:13.307515 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 05:11:13.524814 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 05:11:13.635989 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 05:11:13.635989 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 05:11:13.635989 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 05:11:13.640393 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 05:11:13.640393 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 05:11:13.640393 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 05:11:13.640393 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 05:11:13.640393 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 05:11:13.640393 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 05:11:13.640393 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 05:11:13.640393 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 05:11:13.640393 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 05:11:13.652741 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 05:11:13.652741 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 05:11:13.652741 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 9 05:11:13.926069 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 05:11:14.262879 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 05:11:14.262879 ignition[984]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 05:11:14.265928 ignition[984]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 05:11:14.265928 ignition[984]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 05:11:14.265928 ignition[984]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 05:11:14.265928 ignition[984]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 05:11:14.265928 ignition[984]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 05:11:14.265928 ignition[984]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 05:11:14.265928 ignition[984]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 05:11:14.265928 ignition[984]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 05:11:14.279033 ignition[984]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 05:11:14.281859 ignition[984]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 05:11:14.284345 ignition[984]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 05:11:14.284345 ignition[984]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 05:11:14.284345 ignition[984]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 05:11:14.284345 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 05:11:14.284345 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
file "/sysroot/etc/.ignition-result.json" Sep 9 05:11:14.284345 ignition[984]: INFO : files: files passed Sep 9 05:11:14.284345 ignition[984]: INFO : Ignition finished successfully Sep 9 05:11:14.284951 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 05:11:14.287528 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 05:11:14.289397 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 05:11:14.302396 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 05:11:14.302478 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 05:11:14.305064 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 05:11:14.306529 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:11:14.306529 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:11:14.309291 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:11:14.309055 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:11:14.310475 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 05:11:14.312773 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 05:11:14.353028 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 05:11:14.353135 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 05:11:14.355225 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 05:11:14.356842 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 05:11:14.358481 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 05:11:14.359259 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 05:11:14.396737 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:11:14.399114 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 05:11:14.419149 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:11:14.420405 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:11:14.422387 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 05:11:14.423960 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 05:11:14.424071 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:11:14.426335 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 05:11:14.428128 systemd[1]: Stopped target basic.target - Basic System. Sep 9 05:11:14.429621 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 05:11:14.431242 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:11:14.432971 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 05:11:14.434864 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:11:14.436748 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Sep 9 05:11:14.438591 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:11:14.440475 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 05:11:14.442352 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 05:11:14.444085 systemd[1]: Stopped target swap.target - Swaps. Sep 9 05:11:14.445524 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 05:11:14.445676 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:11:14.447777 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:11:14.449567 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:11:14.451372 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 05:11:14.451489 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:11:14.453302 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 05:11:14.453433 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 05:11:14.455849 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 05:11:14.455978 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:11:14.457841 systemd[1]: Stopped target paths.target - Path Units. Sep 9 05:11:14.459280 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 05:11:14.459396 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:11:14.461131 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 05:11:14.462743 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 05:11:14.464219 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 05:11:14.464310 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 05:11:14.465796 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 05:11:14.465878 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 05:11:14.467983 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 05:11:14.468803 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:11:14.470037 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 05:11:14.470151 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 05:11:14.472069 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 05:11:14.473036 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 05:11:14.473157 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:11:14.475473 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 05:11:14.477044 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 05:11:14.477169 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:11:14.478612 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 05:11:14.478732 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:11:14.484712 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 05:11:14.486749 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 05:11:14.490545 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 9 05:11:14.496182 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 05:11:14.496290 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 05:11:14.500916 ignition[1039]: INFO : Ignition 2.22.0 Sep 9 05:11:14.500916 ignition[1039]: INFO : Stage: umount Sep 9 05:11:14.503188 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:11:14.503188 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:11:14.503188 ignition[1039]: INFO : umount: umount passed Sep 9 05:11:14.503188 ignition[1039]: INFO : Ignition finished successfully Sep 9 05:11:14.504008 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 05:11:14.504117 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 05:11:14.505943 systemd[1]: Stopped target network.target - Network. Sep 9 05:11:14.507904 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 05:11:14.507989 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 05:11:14.509300 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 05:11:14.509342 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 05:11:14.510613 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 05:11:14.510665 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 05:11:14.512212 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 05:11:14.512252 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 05:11:14.513531 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 05:11:14.513571 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 05:11:14.515808 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 05:11:14.517395 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 05:11:14.525427 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 05:11:14.525542 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 05:11:14.528471 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 05:11:14.528833 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 05:11:14.528876 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:11:14.531613 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:11:14.531927 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 05:11:14.532030 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 05:11:14.537461 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 05:11:14.537991 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 05:11:14.539475 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 05:11:14.539519 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:11:14.542113 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 05:11:14.542911 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 05:11:14.542977 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:11:14.544716 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Sep 9 05:11:14.544763 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:11:14.547012 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 05:11:14.547059 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 05:11:14.548867 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:11:14.553497 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 05:11:14.561755 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 05:11:14.561881 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 05:11:14.564144 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 05:11:14.564295 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:11:14.566258 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 05:11:14.566300 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 05:11:14.567969 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 05:11:14.568005 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:11:14.569786 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 05:11:14.569839 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:11:14.572398 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 05:11:14.572448 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 05:11:14.575490 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 05:11:14.575543 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:11:14.579203 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 05:11:14.580864 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 05:11:14.580925 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:11:14.583994 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 05:11:14.584034 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:11:14.587053 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:11:14.587097 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:11:14.594183 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 05:11:14.594304 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 05:11:14.596438 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 05:11:14.598661 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 05:11:14.616446 systemd[1]: Switching root. Sep 9 05:11:14.650780 systemd-journald[244]: Journal stopped Sep 9 05:11:15.406630 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Sep 9 05:11:15.406697 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 05:11:15.406732 kernel: SELinux: policy capability open_perms=1 Sep 9 05:11:15.406743 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 05:11:15.406754 kernel: SELinux: policy capability always_check_network=0 Sep 9 05:11:15.406763 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 05:11:15.406772 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 05:11:15.406781 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 05:11:15.406790 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 05:11:15.406799 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 05:11:15.406812 kernel: audit: type=1403 audit(1757394674.850:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 05:11:15.406828 systemd[1]: Successfully loaded SELinux policy in 65.647ms. Sep 9 05:11:15.406841 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.091ms. Sep 9 05:11:15.406854 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 05:11:15.406864 systemd[1]: Detected virtualization kvm. Sep 9 05:11:15.406874 systemd[1]: Detected architecture arm64. Sep 9 05:11:15.406884 systemd[1]: Detected first boot. Sep 9 05:11:15.406894 systemd[1]: Initializing machine ID from VM UUID. Sep 9 05:11:15.406903 kernel: NET: Registered PF_VSOCK protocol family Sep 9 05:11:15.406914 zram_generator::config[1085]: No configuration found. Sep 9 05:11:15.406924 systemd[1]: Populated /etc with preset unit settings. Sep 9 05:11:15.406936 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 05:11:15.406946 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 05:11:15.406956 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 05:11:15.406967 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 05:11:15.406976 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 05:11:15.406991 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 05:11:15.407001 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 05:11:15.407012 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 05:11:15.407022 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 05:11:15.407034 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 05:11:15.407044 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 05:11:15.407054 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 05:11:15.407064 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:11:15.407074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:11:15.407084 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 05:11:15.407095 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Sep 9 05:11:15.407105 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 05:11:15.407116 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 05:11:15.407126 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 05:11:15.407136 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:11:15.407146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:11:15.407156 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 05:11:15.407166 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 05:11:15.407175 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 05:11:15.407185 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 05:11:15.407197 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:11:15.407220 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:11:15.407230 systemd[1]: Reached target slices.target - Slice Units. Sep 9 05:11:15.407240 systemd[1]: Reached target swap.target - Swaps. Sep 9 05:11:15.407250 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 05:11:15.407260 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 05:11:15.407269 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 05:11:15.407279 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:11:15.407289 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:11:15.407299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:11:15.407311 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 05:11:15.407321 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 05:11:15.407331 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 05:11:15.407340 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 05:11:15.407350 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 05:11:15.407360 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 05:11:15.407370 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 05:11:15.407380 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 05:11:15.407392 systemd[1]: Reached target machines.target - Containers. Sep 9 05:11:15.407402 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 05:11:15.407412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:11:15.407423 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 05:11:15.407433 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 05:11:15.407443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:11:15.407452 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 9 05:11:15.407462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:11:15.407471 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 05:11:15.407483 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:11:15.407493 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 05:11:15.407507 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 05:11:15.407517 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 05:11:15.407526 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 05:11:15.407537 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 05:11:15.407547 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:11:15.407557 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:11:15.407569 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 05:11:15.407579 kernel: loop: module loaded Sep 9 05:11:15.407589 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:11:15.407598 kernel: fuse: init (API version 7.41) Sep 9 05:11:15.407607 kernel: ACPI: bus type drm_connector registered Sep 9 05:11:15.407617 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 05:11:15.407627 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 05:11:15.407642 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:11:15.407658 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 05:11:15.407668 systemd[1]: Stopped verity-setup.service. Sep 9 05:11:15.407677 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 05:11:15.407688 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 05:11:15.407698 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 05:11:15.408340 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 05:11:15.408368 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 05:11:15.408410 systemd-journald[1149]: Collecting audit messages is disabled. Sep 9 05:11:15.408434 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 05:11:15.408445 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:11:15.408456 systemd-journald[1149]: Journal started Sep 9 05:11:15.408478 systemd-journald[1149]: Runtime Journal (/run/log/journal/7dfa0af10a074ae9b4bd1294ccec0902) is 6M, max 48.5M, 42.4M free. Sep 9 05:11:15.202251 systemd[1]: Queued start job for default target multi-user.target. Sep 9 05:11:15.226787 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 05:11:15.227175 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 05:11:15.413009 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:11:15.413903 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 05:11:15.414098 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Sep 9 05:11:15.415678 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 05:11:15.417076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:11:15.417244 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:11:15.419751 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:11:15.420019 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:11:15.421078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:11:15.421252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:11:15.422479 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 05:11:15.422649 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 05:11:15.423926 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:11:15.424110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:11:15.425355 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:11:15.426555 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:11:15.428034 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 05:11:15.429238 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 05:11:15.441431 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:11:15.443695 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 05:11:15.445572 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 05:11:15.446590 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 05:11:15.446619 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:11:15.449222 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 05:11:15.454589 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 05:11:15.455633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:11:15.457134 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 05:11:15.458970 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 05:11:15.459925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:11:15.460956 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 05:11:15.462013 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:11:15.464071 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:11:15.468855 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 05:11:15.471856 systemd-journald[1149]: Time spent on flushing to /var/log/journal/7dfa0af10a074ae9b4bd1294ccec0902 is 20.860ms for 888 entries. Sep 9 05:11:15.471856 systemd-journald[1149]: System Journal (/var/log/journal/7dfa0af10a074ae9b4bd1294ccec0902) is 8M, max 195.6M, 187.6M free. 
Sep 9 05:11:15.508953 systemd-journald[1149]: Received client request to flush runtime journal. Sep 9 05:11:15.509007 kernel: loop0: detected capacity change from 0 to 100632 Sep 9 05:11:15.471398 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 05:11:15.475196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:11:15.476907 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 05:11:15.479200 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 05:11:15.483186 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 05:11:15.486689 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 05:11:15.495635 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 05:11:15.498166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:11:15.513970 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 05:11:15.519879 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 05:11:15.516386 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 05:11:15.522382 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:11:15.536059 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 05:11:15.539794 kernel: loop1: detected capacity change from 0 to 119368 Sep 9 05:11:15.541290 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Sep 9 05:11:15.541310 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Sep 9 05:11:15.544783 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:11:15.565780 kernel: loop2: detected capacity change from 0 to 211168 Sep 9 05:11:15.594745 kernel: loop3: detected capacity change from 0 to 100632 Sep 9 05:11:15.600734 kernel: loop4: detected capacity change from 0 to 119368 Sep 9 05:11:15.606735 kernel: loop5: detected capacity change from 0 to 211168 Sep 9 05:11:15.610429 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 05:11:15.610830 (sd-merge)[1223]: Merged extensions into '/usr'. Sep 9 05:11:15.614315 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 05:11:15.614332 systemd[1]: Reloading... Sep 9 05:11:15.665026 zram_generator::config[1248]: No configuration found. Sep 9 05:11:15.741284 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 05:11:15.800560 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 05:11:15.801091 systemd[1]: Reloading finished in 186 ms. Sep 9 05:11:15.831725 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 05:11:15.833099 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 05:11:15.849863 systemd[1]: Starting ensure-sysext.service... Sep 9 05:11:15.851674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:11:15.860674 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)... Sep 9 05:11:15.860691 systemd[1]: Reloading... 
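The sd-merge step above folds the containerd-flatcar, docker-flatcar, and kubernetes system extensions into /usr before systemd reloads. A small sketch of how such images could be enumerated follows; it assumes the usual sysext search directories (/etc/extensions, /run/extensions, /var/lib/extensions), which matches the /etc/extensions/kubernetes.raw link Ignition created earlier, and it only lists candidates rather than reproducing the merge logic itself.

```python
# Minimal sketch: enumerate system-extension images the way the sd-merge step
# implies they are discovered. The search directories are assumptions based on
# the standard sysext layout.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_sysext_images():
    """Return candidate sysext images (*.raw files or directories) per search dir."""
    found = {}
    for d in SEARCH_DIRS:
        base = Path(d)
        if base.is_dir():
            found[d] = sorted(p.name for p in base.iterdir()
                              if p.suffix == ".raw" or p.is_dir())
    return found

if __name__ == "__main__":
    for directory, images in list_sysext_images().items():
        print(f"{directory}: {', '.join(images) or '(none)'}")
```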
Sep 9 05:11:15.865035 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 05:11:15.865072 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 05:11:15.865294 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 05:11:15.865492 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 05:11:15.866126 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 05:11:15.866334 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Sep 9 05:11:15.866381 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Sep 9 05:11:15.869189 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:11:15.869205 systemd-tmpfiles[1284]: Skipping /boot Sep 9 05:11:15.875078 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:11:15.875091 systemd-tmpfiles[1284]: Skipping /boot Sep 9 05:11:15.904548 zram_generator::config[1312]: No configuration found. Sep 9 05:11:16.030744 systemd[1]: Reloading finished in 169 ms. Sep 9 05:11:16.041269 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 05:11:16.046811 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:11:16.055628 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:11:16.057776 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 05:11:16.067868 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 05:11:16.070832 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:11:16.075947 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:11:16.078507 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 05:11:16.083339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:11:16.089860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:11:16.098961 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:11:16.101250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:11:16.102222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:11:16.102340 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:11:16.103850 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 05:11:16.105555 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 05:11:16.107589 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 05:11:16.109383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:11:16.109532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 9 05:11:16.111326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:11:16.111477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:11:16.113047 augenrules[1376]: No rules Sep 9 05:11:16.115187 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:11:16.115380 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:11:16.122297 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:11:16.123730 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:11:16.127134 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Sep 9 05:11:16.129742 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 05:11:16.133598 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:11:16.134865 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:11:16.137957 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:11:16.149923 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:11:16.153963 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:11:16.155962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:11:16.157549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:11:16.157680 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:11:16.162778 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 05:11:16.164776 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 05:11:16.165886 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 05:11:16.167529 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:11:16.170740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:11:16.181719 augenrules[1388]: /sbin/augenrules: No change Sep 9 05:11:16.187879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:11:16.190536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:11:16.192771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:11:16.194274 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:11:16.194414 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:11:16.195926 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:11:16.196081 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:11:16.198930 augenrules[1442]: No rules Sep 9 05:11:16.199865 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:11:16.200850 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:11:16.205289 systemd[1]: Finished ensure-sysext.service. Sep 9 05:11:16.209375 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Sep 9 05:11:16.221004 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:11:16.221774 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:11:16.221840 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:11:16.224810 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 05:11:16.236912 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 05:11:16.258184 systemd-resolved[1351]: Positive Trust Anchors: Sep 9 05:11:16.258489 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:11:16.258577 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:11:16.266833 systemd-resolved[1351]: Defaulting to hostname 'linux'. Sep 9 05:11:16.268413 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:11:16.269511 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:11:16.297393 systemd-networkd[1454]: lo: Link UP Sep 9 05:11:16.297400 systemd-networkd[1454]: lo: Gained carrier Sep 9 05:11:16.298291 systemd-networkd[1454]: Enumeration completed Sep 9 05:11:16.298402 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:11:16.298730 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:11:16.298734 systemd-networkd[1454]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:11:16.299293 systemd-networkd[1454]: eth0: Link UP Sep 9 05:11:16.299417 systemd-networkd[1454]: eth0: Gained carrier Sep 9 05:11:16.299431 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:11:16.299862 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 05:11:16.301139 systemd[1]: Reached target network.target - Network. Sep 9 05:11:16.302337 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:11:16.303818 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 05:11:16.305038 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 05:11:16.306124 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 05:11:16.307294 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 05:11:16.307324 systemd[1]: Reached target paths.target - Path Units. 
Sep 9 05:11:16.307771 systemd-networkd[1454]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 05:11:16.308265 systemd-timesyncd[1455]: Network configuration changed, trying to establish connection. Sep 9 05:11:16.308344 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 05:11:16.309047 systemd-timesyncd[1455]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 05:11:16.309090 systemd-timesyncd[1455]: Initial clock synchronization to Tue 2025-09-09 05:11:16.207720 UTC. Sep 9 05:11:16.309498 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 05:11:16.310501 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 05:11:16.311524 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:11:16.312961 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 05:11:16.315109 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 05:11:16.318021 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 05:11:16.319274 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 05:11:16.320315 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 05:11:16.325783 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 05:11:16.327100 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 05:11:16.329305 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 05:11:16.331200 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 05:11:16.333461 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 05:11:16.343680 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 05:11:16.345589 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:11:16.346908 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:11:16.348342 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:11:16.348371 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:11:16.349571 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 05:11:16.352297 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 05:11:16.354161 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 05:11:16.356608 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 05:11:16.359892 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 05:11:16.360728 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 05:11:16.361661 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 05:11:16.364928 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 05:11:16.365101 jq[1473]: false Sep 9 05:11:16.366667 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 05:11:16.369659 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
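Above, eth0 comes up with a DHCP lease of 10.0.0.133/16 and gateway 10.0.0.1, which systemd-timesyncd then uses as its NTP server on port 123. A quick sanity check on that lease, using only the values reported in the log:

```python
# Check the DHCP lease the log reports: 10.0.0.133/16 with gateway 10.0.0.1.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.133/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True: the gateway is on-link, no extra route needed
```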
Sep 9 05:11:16.372204 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 05:11:16.376958 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 05:11:16.378688 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 05:11:16.379085 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 05:11:16.380079 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 05:11:16.384671 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 05:11:16.387135 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 05:11:16.389289 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 05:11:16.391003 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 05:11:16.391182 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 05:11:16.392960 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 05:11:16.393134 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 05:11:16.415094 jq[1494]: true Sep 9 05:11:16.418512 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 05:11:16.420871 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 05:11:16.426860 tar[1498]: linux-arm64/LICENSE Sep 9 05:11:16.427311 tar[1498]: linux-arm64/helm Sep 9 05:11:16.427960 update_engine[1490]: I20250909 05:11:16.427652 1490 main.cc:92] Flatcar Update Engine starting Sep 9 05:11:16.429346 dbus-daemon[1471]: [system] SELinux support is enabled Sep 9 05:11:16.429555 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 05:11:16.430991 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 05:11:16.432812 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 05:11:16.432847 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 05:11:16.434733 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 05:11:16.434749 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 05:11:16.437069 systemd[1]: Started update-engine.service - Update Engine. Sep 9 05:11:16.438833 update_engine[1490]: I20250909 05:11:16.438252 1490 update_check_scheduler.cc:74] Next update check in 10m45s Sep 9 05:11:16.438984 jq[1514]: true Sep 9 05:11:16.440942 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 05:11:16.456870 extend-filesystems[1474]: Found /dev/vda6 Sep 9 05:11:16.459580 extend-filesystems[1474]: Found /dev/vda9 Sep 9 05:11:16.463592 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 9 05:11:16.473724 extend-filesystems[1474]: Checking size of /dev/vda9 Sep 9 05:11:16.503727 extend-filesystems[1474]: Resized partition /dev/vda9 Sep 9 05:11:16.507445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:11:16.511133 extend-filesystems[1553]: resize2fs 1.47.3 (8-Jul-2025) Sep 9 05:11:16.533553 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 05:11:16.601555 systemd-logind[1484]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 05:11:16.601933 systemd-logind[1484]: New seat seat0. Sep 9 05:11:16.602668 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 05:11:16.649712 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 05:11:16.659749 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:11:16.742816 containerd[1507]: time="2025-09-09T05:11:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 05:11:16.743493 containerd[1507]: time="2025-09-09T05:11:16.743459160Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.756940840Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.08µs" Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.756977680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.757003600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.757136520Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.757150840Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.757174320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.757219000Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.757230040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.757444320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.757458040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.757467920Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:11:16.757737 containerd[1507]: time="2025-09-09T05:11:16.757475880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 05:11:16.757986 containerd[1507]: time="2025-09-09T05:11:16.757539280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 05:11:16.758113 containerd[1507]: time="2025-09-09T05:11:16.758091720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:11:16.758196 containerd[1507]: time="2025-09-09T05:11:16.758180800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:11:16.758243 containerd[1507]: time="2025-09-09T05:11:16.758230960Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 05:11:16.758324 containerd[1507]: time="2025-09-09T05:11:16.758310040Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 05:11:16.758644 containerd[1507]: time="2025-09-09T05:11:16.758605360Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 05:11:16.758741 containerd[1507]: time="2025-09-09T05:11:16.758724040Z" level=info msg="metadata content store policy set" policy=shared Sep 9 05:11:16.787816 tar[1498]: linux-arm64/README.md Sep 9 05:11:16.805325 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 05:11:16.890728 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 05:11:16.910021 extend-filesystems[1553]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 05:11:16.910021 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 05:11:16.910021 extend-filesystems[1553]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 05:11:16.916643 extend-filesystems[1474]: Resized filesystem in /dev/vda9 Sep 9 05:11:16.912895 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 05:11:16.917861 bash[1549]: Updated "/home/core/.ssh/authorized_keys" Sep 9 05:11:16.913105 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 05:11:16.918065 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
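The extend-filesystems step above grows the root filesystem on /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB. In plain numbers:

```python
# Quick arithmetic on the resize reported for /dev/vda9:
# the ext4 filesystem grows from 553472 to 1864699 blocks of 4 KiB.
BLOCK_SIZE = 4096  # bytes, per the "(4k) blocks" note in the resize output

def blocks_to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

old_blocks, new_blocks = 553_472, 1_864_699
print(f"before: {blocks_to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {blocks_to_gib(new_blocks):.2f} GiB")  # ~7.11 GiB
```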
Sep 9 05:11:16.918840 containerd[1507]: time="2025-09-09T05:11:16.918804360Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 05:11:16.918899 containerd[1507]: time="2025-09-09T05:11:16.918869360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 05:11:16.918919 containerd[1507]: time="2025-09-09T05:11:16.918896320Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 05:11:16.918919 containerd[1507]: time="2025-09-09T05:11:16.918910120Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 05:11:16.918971 containerd[1507]: time="2025-09-09T05:11:16.918922920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 05:11:16.918971 containerd[1507]: time="2025-09-09T05:11:16.918933800Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 05:11:16.918971 containerd[1507]: time="2025-09-09T05:11:16.918945160Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 05:11:16.918971 containerd[1507]: time="2025-09-09T05:11:16.918961960Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 05:11:16.919031 containerd[1507]: time="2025-09-09T05:11:16.918974520Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 05:11:16.919031 containerd[1507]: time="2025-09-09T05:11:16.918985960Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 05:11:16.919031 containerd[1507]: time="2025-09-09T05:11:16.918995680Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 05:11:16.919031 containerd[1507]: time="2025-09-09T05:11:16.919007800Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 05:11:16.919227 containerd[1507]: time="2025-09-09T05:11:16.919210480Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 05:11:16.919254 containerd[1507]: time="2025-09-09T05:11:16.919237720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 05:11:16.919278 containerd[1507]: time="2025-09-09T05:11:16.919253800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 05:11:16.919278 containerd[1507]: time="2025-09-09T05:11:16.919264520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 05:11:16.919310 containerd[1507]: time="2025-09-09T05:11:16.919277480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 05:11:16.919310 containerd[1507]: time="2025-09-09T05:11:16.919288240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 05:11:16.919310 containerd[1507]: time="2025-09-09T05:11:16.919299440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 05:11:16.919365 containerd[1507]: time="2025-09-09T05:11:16.919311640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 05:11:16.919365 
containerd[1507]: time="2025-09-09T05:11:16.919323960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 05:11:16.919365 containerd[1507]: time="2025-09-09T05:11:16.919335000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 05:11:16.919365 containerd[1507]: time="2025-09-09T05:11:16.919345400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 05:11:16.920006 containerd[1507]: time="2025-09-09T05:11:16.919961640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 05:11:16.920006 containerd[1507]: time="2025-09-09T05:11:16.919991640Z" level=info msg="Start snapshots syncer" Sep 9 05:11:16.920067 containerd[1507]: time="2025-09-09T05:11:16.920017040Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 05:11:16.920288 containerd[1507]: time="2025-09-09T05:11:16.920234640Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 05:11:16.920288 containerd[1507]: time="2025-09-09T05:11:16.920288280Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 05:11:16.920424 containerd[1507]: time="2025-09-09T05:11:16.920350800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 05:11:16.920494 containerd[1507]: time="2025-09-09T05:11:16.920451680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 05:11:16.920494 containerd[1507]: time="2025-09-09T05:11:16.920489480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes 
type=io.containerd.grpc.v1 Sep 9 05:11:16.920472 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 05:11:16.920602 containerd[1507]: time="2025-09-09T05:11:16.920501120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 05:11:16.920602 containerd[1507]: time="2025-09-09T05:11:16.920511080Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 05:11:16.920602 containerd[1507]: time="2025-09-09T05:11:16.920522640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 05:11:16.920602 containerd[1507]: time="2025-09-09T05:11:16.920534320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 05:11:16.920602 containerd[1507]: time="2025-09-09T05:11:16.920545200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 05:11:16.921110 containerd[1507]: time="2025-09-09T05:11:16.921082040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 05:11:16.921235 containerd[1507]: time="2025-09-09T05:11:16.921219000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 05:11:16.921293 containerd[1507]: time="2025-09-09T05:11:16.921239360Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 05:11:16.921293 containerd[1507]: time="2025-09-09T05:11:16.921286480Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:11:16.921337 containerd[1507]: time="2025-09-09T05:11:16.921303040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:11:16.921337 containerd[1507]: time="2025-09-09T05:11:16.921312760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:11:16.921337 containerd[1507]: time="2025-09-09T05:11:16.921323480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:11:16.921389 containerd[1507]: time="2025-09-09T05:11:16.921332000Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 05:11:16.921410 containerd[1507]: time="2025-09-09T05:11:16.921393640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 05:11:16.921410 containerd[1507]: time="2025-09-09T05:11:16.921406280Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 05:11:16.921514 containerd[1507]: time="2025-09-09T05:11:16.921498840Z" level=info msg="runtime interface created" Sep 9 05:11:16.921514 containerd[1507]: time="2025-09-09T05:11:16.921509520Z" level=info msg="created NRI interface" Sep 9 05:11:16.921567 containerd[1507]: time="2025-09-09T05:11:16.921521800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 05:11:16.921567 containerd[1507]: time="2025-09-09T05:11:16.921533760Z" level=info msg="Connect containerd service" Sep 9 05:11:16.921567 containerd[1507]: time="2025-09-09T05:11:16.921559640Z" level=info msg="using experimental NRI 
integration - disable nri plugin to prevent this" Sep 9 05:11:16.922398 containerd[1507]: time="2025-09-09T05:11:16.922352760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:11:16.992267 containerd[1507]: time="2025-09-09T05:11:16.992149240Z" level=info msg="Start subscribing containerd event" Sep 9 05:11:16.995046 containerd[1507]: time="2025-09-09T05:11:16.992188360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 05:11:16.995046 containerd[1507]: time="2025-09-09T05:11:16.992722040Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 05:11:16.995046 containerd[1507]: time="2025-09-09T05:11:16.992220400Z" level=info msg="Start recovering state" Sep 9 05:11:16.995046 containerd[1507]: time="2025-09-09T05:11:16.992843240Z" level=info msg="Start event monitor" Sep 9 05:11:16.995046 containerd[1507]: time="2025-09-09T05:11:16.992857040Z" level=info msg="Start cni network conf syncer for default" Sep 9 05:11:16.995046 containerd[1507]: time="2025-09-09T05:11:16.992869160Z" level=info msg="Start streaming server" Sep 9 05:11:16.995046 containerd[1507]: time="2025-09-09T05:11:16.992880680Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 05:11:16.995046 containerd[1507]: time="2025-09-09T05:11:16.992888200Z" level=info msg="runtime interface starting up..." Sep 9 05:11:16.995046 containerd[1507]: time="2025-09-09T05:11:16.992898040Z" level=info msg="starting plugins..." Sep 9 05:11:16.995046 containerd[1507]: time="2025-09-09T05:11:16.992911440Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 05:11:16.995046 containerd[1507]: time="2025-09-09T05:11:16.993023720Z" level=info msg="containerd successfully booted in 0.250707s" Sep 9 05:11:16.993586 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 05:11:17.239551 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 05:11:17.260032 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 05:11:17.263124 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 05:11:17.278589 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 05:11:17.279058 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 05:11:17.282442 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 05:11:17.305750 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 05:11:17.309494 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 05:11:17.311629 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 05:11:17.312893 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 05:11:17.640913 systemd-networkd[1454]: eth0: Gained IPv6LL Sep 9 05:11:17.644784 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 05:11:17.647261 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 05:11:17.650950 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 05:11:17.653400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:11:17.655920 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Sep 9 05:11:17.677779 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 05:11:17.679717 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 05:11:17.679899 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 05:11:17.681564 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 05:11:18.201215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:11:18.203098 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 05:11:18.205189 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:11:18.205408 systemd[1]: Startup finished in 1.987s (kernel) + 5.236s (initrd) + 3.420s (userspace) = 10.644s. Sep 9 05:11:18.542779 kubelet[1628]: E0909 05:11:18.542642 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:11:18.545225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:11:18.545358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:11:18.545841 systemd[1]: kubelet.service: Consumed 742ms CPU time, 256.4M memory peak. Sep 9 05:11:22.024734 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 05:11:22.025657 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:54654.service - OpenSSH per-connection server daemon (10.0.0.1:54654). Sep 9 05:11:22.130885 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 54654 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:11:22.132601 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:11:22.138395 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 05:11:22.139453 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 05:11:22.144766 systemd-logind[1484]: New session 1 of user core. Sep 9 05:11:22.159242 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 05:11:22.163055 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 05:11:22.180518 (systemd)[1647]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 05:11:22.182422 systemd-logind[1484]: New session c1 of user core. Sep 9 05:11:22.282345 systemd[1647]: Queued start job for default target default.target. Sep 9 05:11:22.299736 systemd[1647]: Created slice app.slice - User Application Slice. Sep 9 05:11:22.299762 systemd[1647]: Reached target paths.target - Paths. Sep 9 05:11:22.299804 systemd[1647]: Reached target timers.target - Timers. Sep 9 05:11:22.301009 systemd[1647]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 05:11:22.310319 systemd[1647]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 05:11:22.310386 systemd[1647]: Reached target sockets.target - Sockets. Sep 9 05:11:22.310425 systemd[1647]: Reached target basic.target - Basic System. Sep 9 05:11:22.310463 systemd[1647]: Reached target default.target - Main User Target. Sep 9 05:11:22.310491 systemd[1647]: Startup finished in 122ms. 
Sep 9 05:11:22.310620 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 05:11:22.312509 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 05:11:22.377841 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:54656.service - OpenSSH per-connection server daemon (10.0.0.1:54656). Sep 9 05:11:22.436945 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 54656 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:11:22.437969 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:11:22.442310 systemd-logind[1484]: New session 2 of user core. Sep 9 05:11:22.453838 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 05:11:22.503654 sshd[1661]: Connection closed by 10.0.0.1 port 54656 Sep 9 05:11:22.504529 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Sep 9 05:11:22.513393 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:54656.service: Deactivated successfully. Sep 9 05:11:22.515881 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 05:11:22.517200 systemd-logind[1484]: Session 2 logged out. Waiting for processes to exit. Sep 9 05:11:22.518354 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:54664.service - OpenSSH per-connection server daemon (10.0.0.1:54664). Sep 9 05:11:22.519381 systemd-logind[1484]: Removed session 2. Sep 9 05:11:22.569009 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 54664 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:11:22.570302 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:11:22.573775 systemd-logind[1484]: New session 3 of user core. Sep 9 05:11:22.583839 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 05:11:22.629768 sshd[1670]: Connection closed by 10.0.0.1 port 54664 Sep 9 05:11:22.630066 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Sep 9 05:11:22.639527 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:54664.service: Deactivated successfully. Sep 9 05:11:22.641807 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 05:11:22.642369 systemd-logind[1484]: Session 3 logged out. Waiting for processes to exit. Sep 9 05:11:22.646010 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:54678.service - OpenSSH per-connection server daemon (10.0.0.1:54678). Sep 9 05:11:22.646601 systemd-logind[1484]: Removed session 3. Sep 9 05:11:22.700515 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 54678 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:11:22.701511 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:11:22.704958 systemd-logind[1484]: New session 4 of user core. Sep 9 05:11:22.713821 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 05:11:22.764766 sshd[1679]: Connection closed by 10.0.0.1 port 54678 Sep 9 05:11:22.764675 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Sep 9 05:11:22.776257 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:54678.service: Deactivated successfully. Sep 9 05:11:22.777484 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 05:11:22.779154 systemd-logind[1484]: Session 4 logged out. Waiting for processes to exit. Sep 9 05:11:22.781085 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:54684.service - OpenSSH per-connection server daemon (10.0.0.1:54684). Sep 9 05:11:22.781552 systemd-logind[1484]: Removed session 4. 
Sep 9 05:11:22.821485 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 54684 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:11:22.822723 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:11:22.826673 systemd-logind[1484]: New session 5 of user core. Sep 9 05:11:22.832821 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 05:11:22.886988 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 05:11:22.887240 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:11:22.910739 sudo[1689]: pam_unix(sudo:session): session closed for user root Sep 9 05:11:22.912200 sshd[1688]: Connection closed by 10.0.0.1 port 54684 Sep 9 05:11:22.912549 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Sep 9 05:11:22.930780 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:54684.service: Deactivated successfully. Sep 9 05:11:22.932265 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 05:11:22.935915 systemd-logind[1484]: Session 5 logged out. Waiting for processes to exit. Sep 9 05:11:22.937620 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:54686.service - OpenSSH per-connection server daemon (10.0.0.1:54686). Sep 9 05:11:22.938681 systemd-logind[1484]: Removed session 5. Sep 9 05:11:22.988211 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 54686 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:11:22.989238 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:11:22.992863 systemd-logind[1484]: New session 6 of user core. Sep 9 05:11:23.003910 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 05:11:23.053454 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 05:11:23.053738 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:11:23.136190 sudo[1700]: pam_unix(sudo:session): session closed for user root Sep 9 05:11:23.141044 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 05:11:23.141568 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:11:23.150866 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:11:23.182527 augenrules[1722]: No rules Sep 9 05:11:23.183558 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:11:23.185746 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:11:23.186903 sudo[1699]: pam_unix(sudo:session): session closed for user root Sep 9 05:11:23.188095 sshd[1698]: Connection closed by 10.0.0.1 port 54686 Sep 9 05:11:23.188465 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Sep 9 05:11:23.201635 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:54686.service: Deactivated successfully. Sep 9 05:11:23.203107 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 05:11:23.203761 systemd-logind[1484]: Session 6 logged out. Waiting for processes to exit. Sep 9 05:11:23.205918 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:54698.service - OpenSSH per-connection server daemon (10.0.0.1:54698). Sep 9 05:11:23.206372 systemd-logind[1484]: Removed session 6. 
Sep 9 05:11:23.255429 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 54698 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:11:23.256395 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:11:23.259973 systemd-logind[1484]: New session 7 of user core. Sep 9 05:11:23.269838 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 05:11:23.320154 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 05:11:23.320663 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:11:23.577540 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 05:11:23.597046 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 05:11:23.785425 dockerd[1757]: time="2025-09-09T05:11:23.785373568Z" level=info msg="Starting up" Sep 9 05:11:23.787727 dockerd[1757]: time="2025-09-09T05:11:23.786118394Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 05:11:23.796198 dockerd[1757]: time="2025-09-09T05:11:23.796167907Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 05:11:23.829010 dockerd[1757]: time="2025-09-09T05:11:23.828913174Z" level=info msg="Loading containers: start." Sep 9 05:11:23.836721 kernel: Initializing XFRM netlink socket Sep 9 05:11:24.020427 systemd-networkd[1454]: docker0: Link UP Sep 9 05:11:24.023159 dockerd[1757]: time="2025-09-09T05:11:24.023128359Z" level=info msg="Loading containers: done." Sep 9 05:11:24.034628 dockerd[1757]: time="2025-09-09T05:11:24.034589434Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 05:11:24.034746 dockerd[1757]: time="2025-09-09T05:11:24.034663066Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 05:11:24.034773 dockerd[1757]: time="2025-09-09T05:11:24.034749634Z" level=info msg="Initializing buildkit" Sep 9 05:11:24.052900 dockerd[1757]: time="2025-09-09T05:11:24.052871479Z" level=info msg="Completed buildkit initialization" Sep 9 05:11:24.059107 dockerd[1757]: time="2025-09-09T05:11:24.059077721Z" level=info msg="Daemon has completed initialization" Sep 9 05:11:24.059248 dockerd[1757]: time="2025-09-09T05:11:24.059132646Z" level=info msg="API listen on /run/docker.sock" Sep 9 05:11:24.059414 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 05:11:24.661345 containerd[1507]: time="2025-09-09T05:11:24.661310989Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 05:11:25.237585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2649346679.mount: Deactivated successfully. 
Sep 9 05:11:26.246631 containerd[1507]: time="2025-09-09T05:11:26.246585724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:26.247730 containerd[1507]: time="2025-09-09T05:11:26.247707041Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615" Sep 9 05:11:26.248673 containerd[1507]: time="2025-09-09T05:11:26.248633741Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:26.251247 containerd[1507]: time="2025-09-09T05:11:26.251212053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:26.252812 containerd[1507]: time="2025-09-09T05:11:26.252774166Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.591429694s" Sep 9 05:11:26.252890 containerd[1507]: time="2025-09-09T05:11:26.252819154Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 9 05:11:26.253909 containerd[1507]: time="2025-09-09T05:11:26.253883170Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 05:11:27.836001 containerd[1507]: time="2025-09-09T05:11:27.835953799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:27.837007 containerd[1507]: time="2025-09-09T05:11:27.836746949Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979" Sep 9 05:11:27.837674 containerd[1507]: time="2025-09-09T05:11:27.837631355Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:27.840295 containerd[1507]: time="2025-09-09T05:11:27.840263362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:27.841278 containerd[1507]: time="2025-09-09T05:11:27.841246677Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.58733047s" Sep 9 05:11:27.841278 containerd[1507]: time="2025-09-09T05:11:27.841275700Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 9 05:11:27.841730 containerd[1507]: 
time="2025-09-09T05:11:27.841682262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 05:11:28.795765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 05:11:28.797504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:11:29.063206 containerd[1507]: time="2025-09-09T05:11:29.063160210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:29.064065 containerd[1507]: time="2025-09-09T05:11:29.063833091Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016" Sep 9 05:11:29.064714 containerd[1507]: time="2025-09-09T05:11:29.064676416Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:29.066827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:11:29.069069 containerd[1507]: time="2025-09-09T05:11:29.068754434Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.227019859s" Sep 9 05:11:29.069069 containerd[1507]: time="2025-09-09T05:11:29.068790900Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 9 05:11:29.069302 containerd[1507]: time="2025-09-09T05:11:29.069276779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 05:11:29.069843 containerd[1507]: time="2025-09-09T05:11:29.069819232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:29.070683 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:11:29.104517 kubelet[2045]: E0909 05:11:29.104468 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:11:29.107744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:11:29.107890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:11:29.108192 systemd[1]: kubelet.service: Consumed 142ms CPU time, 105.7M memory peak. Sep 9 05:11:30.086080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351359539.mount: Deactivated successfully. 
Sep 9 05:11:30.319312 containerd[1507]: time="2025-09-09T05:11:30.319241106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:30.319994 containerd[1507]: time="2025-09-09T05:11:30.319962453Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961" Sep 9 05:11:30.320635 containerd[1507]: time="2025-09-09T05:11:30.320603460Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:30.322770 containerd[1507]: time="2025-09-09T05:11:30.322741082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:30.323379 containerd[1507]: time="2025-09-09T05:11:30.323340502Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.254027531s" Sep 9 05:11:30.323379 containerd[1507]: time="2025-09-09T05:11:30.323377060Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 9 05:11:30.323886 containerd[1507]: time="2025-09-09T05:11:30.323818952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 05:11:30.865035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327388077.mount: Deactivated successfully. 
Sep 9 05:11:31.843202 containerd[1507]: time="2025-09-09T05:11:31.843119820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:31.843681 containerd[1507]: time="2025-09-09T05:11:31.843641639Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 9 05:11:31.844563 containerd[1507]: time="2025-09-09T05:11:31.844516369Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:31.847679 containerd[1507]: time="2025-09-09T05:11:31.847644372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:31.849024 containerd[1507]: time="2025-09-09T05:11:31.848987306Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.525135305s" Sep 9 05:11:31.849024 containerd[1507]: time="2025-09-09T05:11:31.849022597Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 9 05:11:31.849623 containerd[1507]: time="2025-09-09T05:11:31.849535953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 05:11:32.268375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3979617395.mount: Deactivated successfully. 
Sep 9 05:11:32.271296 containerd[1507]: time="2025-09-09T05:11:32.271247568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:11:32.271692 containerd[1507]: time="2025-09-09T05:11:32.271664695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 9 05:11:32.272612 containerd[1507]: time="2025-09-09T05:11:32.272583363Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:11:32.274399 containerd[1507]: time="2025-09-09T05:11:32.274373501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:11:32.275368 containerd[1507]: time="2025-09-09T05:11:32.275343002Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 425.77495ms" Sep 9 05:11:32.275449 containerd[1507]: time="2025-09-09T05:11:32.275436602Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 05:11:32.275960 containerd[1507]: time="2025-09-09T05:11:32.275940620Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 05:11:32.683673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651368016.mount: Deactivated successfully. 
Sep 9 05:11:34.324191 containerd[1507]: time="2025-09-09T05:11:34.324126589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:34.324539 containerd[1507]: time="2025-09-09T05:11:34.324504694Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297" Sep 9 05:11:34.325449 containerd[1507]: time="2025-09-09T05:11:34.325413144Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:34.328517 containerd[1507]: time="2025-09-09T05:11:34.328477293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:11:34.330327 containerd[1507]: time="2025-09-09T05:11:34.330291238Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.054237129s" Sep 9 05:11:34.330359 containerd[1507]: time="2025-09-09T05:11:34.330328030Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 9 05:11:38.773314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:11:38.773451 systemd[1]: kubelet.service: Consumed 142ms CPU time, 105.7M memory peak. Sep 9 05:11:38.775303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:11:38.797081 systemd[1]: Reload requested from client PID 2204 ('systemctl') (unit session-7.scope)... Sep 9 05:11:38.797112 systemd[1]: Reloading... Sep 9 05:11:38.876736 zram_generator::config[2247]: No configuration found. Sep 9 05:11:39.050601 systemd[1]: Reloading finished in 253 ms. Sep 9 05:11:39.084045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:11:39.086460 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:11:39.087806 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 05:11:39.088013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:11:39.088059 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.1M memory peak. Sep 9 05:11:39.089464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:11:39.210123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:11:39.215390 (kubelet)[2294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:11:39.249422 kubelet[2294]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:11:39.249422 kubelet[2294]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 9 05:11:39.249422 kubelet[2294]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:11:39.249803 kubelet[2294]: I0909 05:11:39.249451 2294 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:11:40.089955 kubelet[2294]: I0909 05:11:40.089904 2294 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 05:11:40.089955 kubelet[2294]: I0909 05:11:40.089935 2294 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:11:40.090171 kubelet[2294]: I0909 05:11:40.090153 2294 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 05:11:40.113456 kubelet[2294]: I0909 05:11:40.113087 2294 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:11:40.113981 kubelet[2294]: E0909 05:11:40.113947 2294 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 05:11:40.119504 kubelet[2294]: I0909 05:11:40.119481 2294 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:11:40.122118 kubelet[2294]: I0909 05:11:40.122101 2294 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 05:11:40.123116 kubelet[2294]: I0909 05:11:40.123071 2294 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:11:40.123330 kubelet[2294]: I0909 05:11:40.123114 2294 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:11:40.123427 kubelet[2294]: I0909 05:11:40.123393 2294 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:11:40.123427 kubelet[2294]: I0909 05:11:40.123404 2294 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 05:11:40.123606 kubelet[2294]: I0909 05:11:40.123591 2294 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:11:40.126020 kubelet[2294]: I0909 05:11:40.126001 2294 kubelet.go:480] "Attempting to sync node with API server" Sep 9 05:11:40.126068 kubelet[2294]: I0909 05:11:40.126028 2294 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:11:40.126068 kubelet[2294]: I0909 05:11:40.126054 2294 kubelet.go:386] "Adding apiserver pod source" Sep 9 05:11:40.127167 kubelet[2294]: I0909 05:11:40.127055 2294 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:11:40.128012 kubelet[2294]: I0909 05:11:40.127995 2294 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:11:40.128815 kubelet[2294]: I0909 05:11:40.128781 2294 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 05:11:40.128947 kubelet[2294]: W0909 05:11:40.128932 2294 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 9 05:11:40.130457 kubelet[2294]: E0909 05:11:40.130416 2294 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 05:11:40.131861 kubelet[2294]: E0909 05:11:40.131825 2294 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 05:11:40.132022 kubelet[2294]: I0909 05:11:40.131993 2294 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:11:40.132057 kubelet[2294]: I0909 05:11:40.132044 2294 server.go:1289] "Started kubelet" Sep 9 05:11:40.132145 kubelet[2294]: I0909 05:11:40.132118 2294 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:11:40.137054 kubelet[2294]: I0909 05:11:40.137003 2294 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:11:40.137954 kubelet[2294]: E0909 05:11:40.136481 2294 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863852b2f85dc1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 05:11:40.132015133 +0000 UTC m=+0.913160431,LastTimestamp:2025-09-09 05:11:40.132015133 +0000 UTC m=+0.913160431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 05:11:40.138053 kubelet[2294]: I0909 05:11:40.138043 2294 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:11:40.138315 kubelet[2294]: I0909 05:11:40.138294 2294 server.go:317] "Adding debug handlers to kubelet server" Sep 9 05:11:40.140311 kubelet[2294]: E0909 05:11:40.140217 2294 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:11:40.140311 kubelet[2294]: I0909 05:11:40.140263 2294 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:11:40.140819 kubelet[2294]: I0909 05:11:40.140513 2294 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:11:40.140819 kubelet[2294]: I0909 05:11:40.140579 2294 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:11:40.140819 kubelet[2294]: I0909 05:11:40.140634 2294 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:11:40.141384 kubelet[2294]: I0909 05:11:40.140928 2294 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:11:40.141384 kubelet[2294]: E0909 05:11:40.141202 2294 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 05:11:40.141576 kubelet[2294]: I0909 05:11:40.141429 2294 factory.go:223] Registration of the systemd container factory successfully Sep 9 05:11:40.141576 kubelet[2294]: E0909 05:11:40.141523 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms" Sep 9 05:11:40.141637 kubelet[2294]: I0909 05:11:40.141575 2294 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:11:40.142495 kubelet[2294]: E0909 05:11:40.142125 2294 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:11:40.143054 kubelet[2294]: I0909 05:11:40.143034 2294 factory.go:223] Registration of the containerd container factory successfully Sep 9 05:11:40.152076 kubelet[2294]: I0909 05:11:40.151887 2294 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:11:40.152076 kubelet[2294]: I0909 05:11:40.151903 2294 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:11:40.152076 kubelet[2294]: I0909 05:11:40.151919 2294 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:11:40.155801 kubelet[2294]: I0909 05:11:40.155766 2294 policy_none.go:49] "None policy: Start" Sep 9 05:11:40.155801 kubelet[2294]: I0909 05:11:40.155796 2294 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:11:40.155903 kubelet[2294]: I0909 05:11:40.155808 2294 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:11:40.160427 kubelet[2294]: I0909 05:11:40.160363 2294 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 05:11:40.161692 kubelet[2294]: I0909 05:11:40.161657 2294 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 05:11:40.161692 kubelet[2294]: I0909 05:11:40.161682 2294 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 05:11:40.161795 kubelet[2294]: I0909 05:11:40.161714 2294 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 05:11:40.161795 kubelet[2294]: I0909 05:11:40.161737 2294 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 05:11:40.161795 kubelet[2294]: E0909 05:11:40.161778 2294 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:11:40.162347 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 9 05:11:40.163784 kubelet[2294]: E0909 05:11:40.163672 2294 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 05:11:40.175594 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 05:11:40.178475 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 05:11:40.195682 kubelet[2294]: E0909 05:11:40.195645 2294 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 05:11:40.195916 kubelet[2294]: I0909 05:11:40.195882 2294 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:11:40.195949 kubelet[2294]: I0909 05:11:40.195903 2294 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:11:40.196171 kubelet[2294]: I0909 05:11:40.196142 2294 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:11:40.197052 kubelet[2294]: E0909 05:11:40.197032 2294 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 05:11:40.197168 kubelet[2294]: E0909 05:11:40.197145 2294 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 05:11:40.272305 systemd[1]: Created slice kubepods-burstable-pode42124a05faa2685aaa060b674e4bf70.slice - libcontainer container kubepods-burstable-pode42124a05faa2685aaa060b674e4bf70.slice. Sep 9 05:11:40.282752 kubelet[2294]: E0909 05:11:40.282409 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:11:40.284859 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 05:11:40.298037 kubelet[2294]: I0909 05:11:40.297991 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:11:40.298581 kubelet[2294]: E0909 05:11:40.298551 2294 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Sep 9 05:11:40.303960 kubelet[2294]: E0909 05:11:40.303814 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:11:40.306128 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 9 05:11:40.307831 kubelet[2294]: E0909 05:11:40.307654 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:11:40.341924 kubelet[2294]: I0909 05:11:40.341837 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e42124a05faa2685aaa060b674e4bf70-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e42124a05faa2685aaa060b674e4bf70\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:11:40.341924 kubelet[2294]: I0909 05:11:40.341870 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e42124a05faa2685aaa060b674e4bf70-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e42124a05faa2685aaa060b674e4bf70\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:11:40.341924 kubelet[2294]: I0909 05:11:40.341890 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e42124a05faa2685aaa060b674e4bf70-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e42124a05faa2685aaa060b674e4bf70\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:11:40.341924 kubelet[2294]: I0909 05:11:40.341907 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:40.341924 kubelet[2294]: I0909 05:11:40.341922 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:40.342082 kubelet[2294]: I0909 05:11:40.341940 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:40.342082 kubelet[2294]: I0909 05:11:40.341953 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:40.342082 kubelet[2294]: I0909 05:11:40.341966 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:40.342082 kubelet[2294]: I0909 05:11:40.341980 2294 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 05:11:40.342455 kubelet[2294]: E0909 05:11:40.342424 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms" Sep 9 05:11:40.500215 kubelet[2294]: I0909 05:11:40.499815 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:11:40.500215 kubelet[2294]: E0909 05:11:40.500121 2294 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Sep 9 05:11:40.583473 containerd[1507]: time="2025-09-09T05:11:40.583423857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e42124a05faa2685aaa060b674e4bf70,Namespace:kube-system,Attempt:0,}" Sep 9 05:11:40.603462 containerd[1507]: time="2025-09-09T05:11:40.603345601Z" level=info msg="connecting to shim 184873c1ed5180f28f3d200acee813d1598c2d07561fc176c045a247ab10c265" address="unix:///run/containerd/s/249ef372153744d2a4b184750b75029f454b84b8e2cff698de25306056a4bed5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:11:40.605381 containerd[1507]: time="2025-09-09T05:11:40.605346826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 05:11:40.609068 containerd[1507]: time="2025-09-09T05:11:40.609033061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 05:11:40.629384 containerd[1507]: time="2025-09-09T05:11:40.628867456Z" level=info msg="connecting to shim d0d663a72299a96682d85e55149295a808d3034e86b68562d26541932b8126a9" address="unix:///run/containerd/s/fd1bf9804ab9fa941602a8944432d7d259ad1ddaf1ed558c26297ef4d351687c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:11:40.629925 systemd[1]: Started cri-containerd-184873c1ed5180f28f3d200acee813d1598c2d07561fc176c045a247ab10c265.scope - libcontainer container 184873c1ed5180f28f3d200acee813d1598c2d07561fc176c045a247ab10c265. Sep 9 05:11:40.637171 containerd[1507]: time="2025-09-09T05:11:40.637121170Z" level=info msg="connecting to shim 9b6393b70403954693b7d00d82e389884c5533aa7e4b9d95e66521e51f5b0495" address="unix:///run/containerd/s/93953e11e00590a9f7097902ba2d598c99c29ba3e2d524a0cce251c3b7941f28" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:11:40.661896 systemd[1]: Started cri-containerd-d0d663a72299a96682d85e55149295a808d3034e86b68562d26541932b8126a9.scope - libcontainer container d0d663a72299a96682d85e55149295a808d3034e86b68562d26541932b8126a9. Sep 9 05:11:40.665059 systemd[1]: Started cri-containerd-9b6393b70403954693b7d00d82e389884c5533aa7e4b9d95e66521e51f5b0495.scope - libcontainer container 9b6393b70403954693b7d00d82e389884c5533aa7e4b9d95e66521e51f5b0495. 
Sep 9 05:11:40.679671 containerd[1507]: time="2025-09-09T05:11:40.679626054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e42124a05faa2685aaa060b674e4bf70,Namespace:kube-system,Attempt:0,} returns sandbox id \"184873c1ed5180f28f3d200acee813d1598c2d07561fc176c045a247ab10c265\"" Sep 9 05:11:40.687024 containerd[1507]: time="2025-09-09T05:11:40.686990450Z" level=info msg="CreateContainer within sandbox \"184873c1ed5180f28f3d200acee813d1598c2d07561fc176c045a247ab10c265\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 05:11:40.691795 containerd[1507]: time="2025-09-09T05:11:40.691759410Z" level=info msg="Container fa66a2ffbc267b547c785c51124aa582dc216ad06c9d8f36696069e55cdd300f: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:11:40.700454 containerd[1507]: time="2025-09-09T05:11:40.700414289Z" level=info msg="CreateContainer within sandbox \"184873c1ed5180f28f3d200acee813d1598c2d07561fc176c045a247ab10c265\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fa66a2ffbc267b547c785c51124aa582dc216ad06c9d8f36696069e55cdd300f\"" Sep 9 05:11:40.701078 containerd[1507]: time="2025-09-09T05:11:40.701051594Z" level=info msg="StartContainer for \"fa66a2ffbc267b547c785c51124aa582dc216ad06c9d8f36696069e55cdd300f\"" Sep 9 05:11:40.702149 containerd[1507]: time="2025-09-09T05:11:40.702124325Z" level=info msg="connecting to shim fa66a2ffbc267b547c785c51124aa582dc216ad06c9d8f36696069e55cdd300f" address="unix:///run/containerd/s/249ef372153744d2a4b184750b75029f454b84b8e2cff698de25306056a4bed5" protocol=ttrpc version=3 Sep 9 05:11:40.707617 containerd[1507]: time="2025-09-09T05:11:40.707581401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0d663a72299a96682d85e55149295a808d3034e86b68562d26541932b8126a9\"" Sep 9 05:11:40.710231 containerd[1507]: time="2025-09-09T05:11:40.710201662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b6393b70403954693b7d00d82e389884c5533aa7e4b9d95e66521e51f5b0495\"" Sep 9 05:11:40.711859 containerd[1507]: time="2025-09-09T05:11:40.711818153Z" level=info msg="CreateContainer within sandbox \"d0d663a72299a96682d85e55149295a808d3034e86b68562d26541932b8126a9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 05:11:40.713934 containerd[1507]: time="2025-09-09T05:11:40.713903808Z" level=info msg="CreateContainer within sandbox \"9b6393b70403954693b7d00d82e389884c5533aa7e4b9d95e66521e51f5b0495\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 05:11:40.720722 containerd[1507]: time="2025-09-09T05:11:40.720193875Z" level=info msg="Container 901c7492ed183049b26cc89b4d188196191a029ab83c6acd5081a5e5fe594917: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:11:40.723048 containerd[1507]: time="2025-09-09T05:11:40.723015978Z" level=info msg="Container 7dff02951e34a536f479c3512ca81c016b8e68797cd5de88c4e999d3ba860a5e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:11:40.727031 containerd[1507]: time="2025-09-09T05:11:40.726999719Z" level=info msg="CreateContainer within sandbox \"d0d663a72299a96682d85e55149295a808d3034e86b68562d26541932b8126a9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"901c7492ed183049b26cc89b4d188196191a029ab83c6acd5081a5e5fe594917\"" Sep 9 05:11:40.727455 containerd[1507]: time="2025-09-09T05:11:40.727430027Z" level=info msg="StartContainer for \"901c7492ed183049b26cc89b4d188196191a029ab83c6acd5081a5e5fe594917\"" Sep 9 05:11:40.727884 systemd[1]: Started cri-containerd-fa66a2ffbc267b547c785c51124aa582dc216ad06c9d8f36696069e55cdd300f.scope - libcontainer container fa66a2ffbc267b547c785c51124aa582dc216ad06c9d8f36696069e55cdd300f. Sep 9 05:11:40.729300 containerd[1507]: time="2025-09-09T05:11:40.729243482Z" level=info msg="CreateContainer within sandbox \"9b6393b70403954693b7d00d82e389884c5533aa7e4b9d95e66521e51f5b0495\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7dff02951e34a536f479c3512ca81c016b8e68797cd5de88c4e999d3ba860a5e\"" Sep 9 05:11:40.729502 containerd[1507]: time="2025-09-09T05:11:40.729405947Z" level=info msg="connecting to shim 901c7492ed183049b26cc89b4d188196191a029ab83c6acd5081a5e5fe594917" address="unix:///run/containerd/s/fd1bf9804ab9fa941602a8944432d7d259ad1ddaf1ed558c26297ef4d351687c" protocol=ttrpc version=3 Sep 9 05:11:40.729816 containerd[1507]: time="2025-09-09T05:11:40.729771772Z" level=info msg="StartContainer for \"7dff02951e34a536f479c3512ca81c016b8e68797cd5de88c4e999d3ba860a5e\"" Sep 9 05:11:40.731494 containerd[1507]: time="2025-09-09T05:11:40.731446788Z" level=info msg="connecting to shim 7dff02951e34a536f479c3512ca81c016b8e68797cd5de88c4e999d3ba860a5e" address="unix:///run/containerd/s/93953e11e00590a9f7097902ba2d598c99c29ba3e2d524a0cce251c3b7941f28" protocol=ttrpc version=3 Sep 9 05:11:40.743973 kubelet[2294]: E0909 05:11:40.743937 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms" Sep 9 05:11:40.751918 systemd[1]: Started cri-containerd-901c7492ed183049b26cc89b4d188196191a029ab83c6acd5081a5e5fe594917.scope - libcontainer container 901c7492ed183049b26cc89b4d188196191a029ab83c6acd5081a5e5fe594917. Sep 9 05:11:40.754197 systemd[1]: Started cri-containerd-7dff02951e34a536f479c3512ca81c016b8e68797cd5de88c4e999d3ba860a5e.scope - libcontainer container 7dff02951e34a536f479c3512ca81c016b8e68797cd5de88c4e999d3ba860a5e. 
Sep 9 05:11:40.786217 containerd[1507]: time="2025-09-09T05:11:40.786140956Z" level=info msg="StartContainer for \"fa66a2ffbc267b547c785c51124aa582dc216ad06c9d8f36696069e55cdd300f\" returns successfully" Sep 9 05:11:40.803681 containerd[1507]: time="2025-09-09T05:11:40.803648237Z" level=info msg="StartContainer for \"901c7492ed183049b26cc89b4d188196191a029ab83c6acd5081a5e5fe594917\" returns successfully" Sep 9 05:11:40.813440 containerd[1507]: time="2025-09-09T05:11:40.813346623Z" level=info msg="StartContainer for \"7dff02951e34a536f479c3512ca81c016b8e68797cd5de88c4e999d3ba860a5e\" returns successfully" Sep 9 05:11:40.901767 kubelet[2294]: I0909 05:11:40.901625 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:11:41.172415 kubelet[2294]: E0909 05:11:41.172323 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:11:41.173168 kubelet[2294]: E0909 05:11:41.173140 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:11:41.173711 kubelet[2294]: E0909 05:11:41.173688 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:11:42.176303 kubelet[2294]: E0909 05:11:42.176271 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:11:42.177945 kubelet[2294]: E0909 05:11:42.176777 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:11:42.177945 kubelet[2294]: E0909 05:11:42.176907 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:11:42.581198 kubelet[2294]: E0909 05:11:42.581090 2294 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 05:11:42.769601 kubelet[2294]: I0909 05:11:42.769566 2294 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 05:11:42.842862 kubelet[2294]: I0909 05:11:42.842753 2294 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:11:42.847710 kubelet[2294]: E0909 05:11:42.847675 2294 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 05:11:42.847766 kubelet[2294]: I0909 05:11:42.847714 2294 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:42.849403 kubelet[2294]: E0909 05:11:42.849382 2294 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:42.849403 kubelet[2294]: I0909 05:11:42.849403 2294 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:11:42.850744 kubelet[2294]: E0909 05:11:42.850721 2294 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 05:11:43.131263 kubelet[2294]: I0909 05:11:43.131146 2294 apiserver.go:52] "Watching apiserver" Sep 9 05:11:43.140997 kubelet[2294]: I0909 05:11:43.140945 2294 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:11:44.370094 kubelet[2294]: I0909 05:11:44.370064 2294 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:11:44.654277 systemd[1]: Reload requested from client PID 2581 ('systemctl') (unit session-7.scope)... Sep 9 05:11:44.654293 systemd[1]: Reloading... Sep 9 05:11:44.726747 zram_generator::config[2627]: No configuration found. Sep 9 05:11:44.890093 systemd[1]: Reloading finished in 235 ms. Sep 9 05:11:44.920559 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:11:44.932601 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 05:11:44.932851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:11:44.932905 systemd[1]: kubelet.service: Consumed 1.296s CPU time, 129.4M memory peak. Sep 9 05:11:44.934594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:11:45.074235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:11:45.077615 (kubelet)[2666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:11:45.117292 kubelet[2666]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:11:45.117292 kubelet[2666]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:11:45.117292 kubelet[2666]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 05:11:45.117606 kubelet[2666]: I0909 05:11:45.117335 2666 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:11:45.122563 kubelet[2666]: I0909 05:11:45.122527 2666 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 05:11:45.122563 kubelet[2666]: I0909 05:11:45.122558 2666 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:11:45.122791 kubelet[2666]: I0909 05:11:45.122776 2666 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 05:11:45.123970 kubelet[2666]: I0909 05:11:45.123946 2666 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 05:11:45.126066 kubelet[2666]: I0909 05:11:45.126037 2666 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:11:45.130552 kubelet[2666]: I0909 05:11:45.130529 2666 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:11:45.133751 kubelet[2666]: I0909 05:11:45.133126 2666 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 05:11:45.133751 kubelet[2666]: I0909 05:11:45.133342 2666 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:11:45.133751 kubelet[2666]: I0909 05:11:45.133364 2666 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:11:45.133751 kubelet[2666]: I0909 05:11:45.133544 2666 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:11:45.133960 kubelet[2666]: I0909 05:11:45.133553 2666 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 05:11:45.133960 kubelet[2666]: I0909 05:11:45.133598 2666 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:11:45.133960 kubelet[2666]: I0909 05:11:45.133769 2666 
kubelet.go:480] "Attempting to sync node with API server" Sep 9 05:11:45.133960 kubelet[2666]: I0909 05:11:45.133781 2666 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:11:45.133960 kubelet[2666]: I0909 05:11:45.133803 2666 kubelet.go:386] "Adding apiserver pod source" Sep 9 05:11:45.133960 kubelet[2666]: I0909 05:11:45.133815 2666 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:11:45.134497 kubelet[2666]: I0909 05:11:45.134456 2666 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:11:45.138126 kubelet[2666]: I0909 05:11:45.138095 2666 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 05:11:45.144511 kubelet[2666]: I0909 05:11:45.144488 2666 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:11:45.144586 kubelet[2666]: I0909 05:11:45.144531 2666 server.go:1289] "Started kubelet" Sep 9 05:11:45.146551 kubelet[2666]: I0909 05:11:45.144662 2666 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:11:45.148751 kubelet[2666]: I0909 05:11:45.147402 2666 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:11:45.149026 kubelet[2666]: I0909 05:11:45.149006 2666 server.go:317] "Adding debug handlers to kubelet server" Sep 9 05:11:45.152734 kubelet[2666]: I0909 05:11:45.152650 2666 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:11:45.154769 kubelet[2666]: I0909 05:11:45.154718 2666 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:11:45.155062 kubelet[2666]: E0909 05:11:45.155040 2666 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:11:45.156099 kubelet[2666]: I0909 05:11:45.156064 2666 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:11:45.156557 kubelet[2666]: I0909 05:11:45.156242 2666 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:11:45.157445 kubelet[2666]: I0909 05:11:45.157370 2666 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:11:45.158734 kubelet[2666]: I0909 05:11:45.157596 2666 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:11:45.158734 kubelet[2666]: E0909 05:11:45.158252 2666 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:11:45.164332 kubelet[2666]: I0909 05:11:45.164285 2666 factory.go:223] Registration of the containerd container factory successfully Sep 9 05:11:45.164332 kubelet[2666]: I0909 05:11:45.164310 2666 factory.go:223] Registration of the systemd container factory successfully Sep 9 05:11:45.164423 kubelet[2666]: I0909 05:11:45.164381 2666 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:11:45.169396 kubelet[2666]: I0909 05:11:45.169252 2666 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 9 05:11:45.170445 kubelet[2666]: I0909 05:11:45.170425 2666 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 05:11:45.170520 kubelet[2666]: I0909 05:11:45.170511 2666 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 05:11:45.170597 kubelet[2666]: I0909 05:11:45.170586 2666 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 05:11:45.170650 kubelet[2666]: I0909 05:11:45.170641 2666 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 05:11:45.170756 kubelet[2666]: E0909 05:11:45.170737 2666 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:11:45.196928 kubelet[2666]: I0909 05:11:45.196841 2666 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:11:45.197028 kubelet[2666]: I0909 05:11:45.197013 2666 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:11:45.197092 kubelet[2666]: I0909 05:11:45.197083 2666 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:11:45.197259 kubelet[2666]: I0909 05:11:45.197243 2666 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 05:11:45.197337 kubelet[2666]: I0909 05:11:45.197314 2666 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 05:11:45.197390 kubelet[2666]: I0909 05:11:45.197380 2666 policy_none.go:49] "None policy: Start" Sep 9 05:11:45.197438 kubelet[2666]: I0909 05:11:45.197429 2666 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:11:45.197492 kubelet[2666]: I0909 05:11:45.197484 2666 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:11:45.197643 kubelet[2666]: I0909 05:11:45.197629 2666 state_mem.go:75] "Updated machine memory state" Sep 9 05:11:45.203862 kubelet[2666]: E0909 05:11:45.203837 2666 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 05:11:45.204114 kubelet[2666]: I0909 05:11:45.204090 2666 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:11:45.204199 kubelet[2666]: I0909 05:11:45.204165 2666 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:11:45.204378 kubelet[2666]: I0909 05:11:45.204363 2666 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:11:45.206725 kubelet[2666]: E0909 05:11:45.205381 2666 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 05:11:45.271593 kubelet[2666]: I0909 05:11:45.271534 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:11:45.271747 kubelet[2666]: I0909 05:11:45.271716 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:45.273577 kubelet[2666]: I0909 05:11:45.273554 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:11:45.276056 kubelet[2666]: E0909 05:11:45.276024 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 05:11:45.306402 kubelet[2666]: I0909 05:11:45.306367 2666 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:11:45.312626 kubelet[2666]: I0909 05:11:45.312600 2666 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 05:11:45.312693 kubelet[2666]: I0909 05:11:45.312675 2666 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 05:11:45.357171 kubelet[2666]: I0909 05:11:45.356927 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e42124a05faa2685aaa060b674e4bf70-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e42124a05faa2685aaa060b674e4bf70\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:11:45.357171 kubelet[2666]: I0909 05:11:45.356964 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e42124a05faa2685aaa060b674e4bf70-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e42124a05faa2685aaa060b674e4bf70\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:11:45.357171 kubelet[2666]: I0909 05:11:45.356998 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:45.357171 kubelet[2666]: I0909 05:11:45.357018 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:45.357171 kubelet[2666]: I0909 05:11:45.357035 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:45.357346 kubelet[2666]: I0909 05:11:45.357051 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 
05:11:45.357346 kubelet[2666]: I0909 05:11:45.357072 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e42124a05faa2685aaa060b674e4bf70-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e42124a05faa2685aaa060b674e4bf70\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:11:45.357346 kubelet[2666]: I0909 05:11:45.357088 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:45.357346 kubelet[2666]: I0909 05:11:45.357104 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:11:45.651150 sudo[2705]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 05:11:45.651400 sudo[2705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 05:11:45.965304 sudo[2705]: pam_unix(sudo:session): session closed for user root Sep 9 05:11:46.135259 kubelet[2666]: I0909 05:11:46.135222 2666 apiserver.go:52] "Watching apiserver" Sep 9 05:11:46.156389 kubelet[2666]: I0909 05:11:46.156369 2666 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:11:46.184580 kubelet[2666]: I0909 05:11:46.183908 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:11:46.187952 kubelet[2666]: E0909 05:11:46.187920 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 05:11:46.206986 kubelet[2666]: I0909 05:11:46.206855 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.206844713 podStartE2EDuration="2.206844713s" podCreationTimestamp="2025-09-09 05:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:11:46.200901918 +0000 UTC m=+1.119711598" watchObservedRunningTime="2025-09-09 05:11:46.206844713 +0000 UTC m=+1.125654353" Sep 9 05:11:46.207132 kubelet[2666]: I0909 05:11:46.207099 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.20696892 podStartE2EDuration="1.20696892s" podCreationTimestamp="2025-09-09 05:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:11:46.206958723 +0000 UTC m=+1.125768403" watchObservedRunningTime="2025-09-09 05:11:46.20696892 +0000 UTC m=+1.125778560" Sep 9 05:11:46.223647 kubelet[2666]: I0909 05:11:46.223542 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.223524719 podStartE2EDuration="1.223524719s" 
podCreationTimestamp="2025-09-09 05:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:11:46.215489996 +0000 UTC m=+1.134299676" watchObservedRunningTime="2025-09-09 05:11:46.223524719 +0000 UTC m=+1.142334399" Sep 9 05:11:47.620327 sudo[1735]: pam_unix(sudo:session): session closed for user root Sep 9 05:11:47.621454 sshd[1734]: Connection closed by 10.0.0.1 port 54698 Sep 9 05:11:47.621925 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Sep 9 05:11:47.625258 systemd-logind[1484]: Session 7 logged out. Waiting for processes to exit. Sep 9 05:11:47.626518 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:54698.service: Deactivated successfully. Sep 9 05:11:47.628641 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 05:11:47.629799 systemd[1]: session-7.scope: Consumed 6.426s CPU time, 259M memory peak. Sep 9 05:11:47.632150 systemd-logind[1484]: Removed session 7. Sep 9 05:11:50.017456 kubelet[2666]: I0909 05:11:50.017426 2666 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 05:11:50.018173 containerd[1507]: time="2025-09-09T05:11:50.018138373Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 05:11:50.018500 kubelet[2666]: I0909 05:11:50.018300 2666 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 05:11:51.122070 systemd[1]: Created slice kubepods-besteffort-podac8ff5bd_4f07_410d_b2e6_73bbd2ae097c.slice - libcontainer container kubepods-besteffort-podac8ff5bd_4f07_410d_b2e6_73bbd2ae097c.slice. Sep 9 05:11:51.141902 systemd[1]: Created slice kubepods-burstable-pod938d4635_5470_4017_8c4f_e2705575ba8a.slice - libcontainer container kubepods-burstable-pod938d4635_5470_4017_8c4f_e2705575ba8a.slice. 
Sep 9 05:11:51.198725 kubelet[2666]: I0909 05:11:51.197658 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac8ff5bd-4f07-410d-b2e6-73bbd2ae097c-xtables-lock\") pod \"kube-proxy-rcqtd\" (UID: \"ac8ff5bd-4f07-410d-b2e6-73bbd2ae097c\") " pod="kube-system/kube-proxy-rcqtd" Sep 9 05:11:51.198725 kubelet[2666]: I0909 05:11:51.197759 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cni-path\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.198725 kubelet[2666]: I0909 05:11:51.197782 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-host-proc-sys-kernel\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.198725 kubelet[2666]: I0909 05:11:51.197799 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/938d4635-5470-4017-8c4f-e2705575ba8a-hubble-tls\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.198725 kubelet[2666]: I0909 05:11:51.197815 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac8ff5bd-4f07-410d-b2e6-73bbd2ae097c-kube-proxy\") pod \"kube-proxy-rcqtd\" (UID: \"ac8ff5bd-4f07-410d-b2e6-73bbd2ae097c\") " pod="kube-system/kube-proxy-rcqtd" Sep 9 05:11:51.198725 kubelet[2666]: I0909 05:11:51.197840 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac8ff5bd-4f07-410d-b2e6-73bbd2ae097c-lib-modules\") pod \"kube-proxy-rcqtd\" (UID: \"ac8ff5bd-4f07-410d-b2e6-73bbd2ae097c\") " pod="kube-system/kube-proxy-rcqtd" Sep 9 05:11:51.199538 kubelet[2666]: I0909 05:11:51.197858 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-bpf-maps\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.199538 kubelet[2666]: I0909 05:11:51.197936 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-host-proc-sys-net\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.199538 kubelet[2666]: I0909 05:11:51.197955 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-hostproc\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.199538 kubelet[2666]: I0909 05:11:51.197970 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-etc-cni-netd\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.199538 kubelet[2666]: I0909 05:11:51.197984 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-lib-modules\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.199538 kubelet[2666]: I0909 05:11:51.198027 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj97z\" (UniqueName: \"kubernetes.io/projected/938d4635-5470-4017-8c4f-e2705575ba8a-kube-api-access-zj97z\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.199791 kubelet[2666]: I0909 05:11:51.198826 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tl4n\" (UniqueName: \"kubernetes.io/projected/ac8ff5bd-4f07-410d-b2e6-73bbd2ae097c-kube-api-access-5tl4n\") pod \"kube-proxy-rcqtd\" (UID: \"ac8ff5bd-4f07-410d-b2e6-73bbd2ae097c\") " pod="kube-system/kube-proxy-rcqtd" Sep 9 05:11:51.199791 kubelet[2666]: I0909 05:11:51.198952 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-run\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.199791 kubelet[2666]: I0909 05:11:51.198983 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-cgroup\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.199791 kubelet[2666]: I0909 05:11:51.198999 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-xtables-lock\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.199791 kubelet[2666]: I0909 05:11:51.199013 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/938d4635-5470-4017-8c4f-e2705575ba8a-clustermesh-secrets\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.199951 kubelet[2666]: I0909 05:11:51.199028 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-config-path\") pod \"cilium-cr4h4\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " pod="kube-system/cilium-cr4h4" Sep 9 05:11:51.209394 systemd[1]: Created slice kubepods-besteffort-poda25a10f8_0125_4c69_8369_96332003a4ce.slice - libcontainer container kubepods-besteffort-poda25a10f8_0125_4c69_8369_96332003a4ce.slice. 
Sep 9 05:11:51.299896 kubelet[2666]: I0909 05:11:51.299845 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a25a10f8-0125-4c69-8369-96332003a4ce-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pxpg7\" (UID: \"a25a10f8-0125-4c69-8369-96332003a4ce\") " pod="kube-system/cilium-operator-6c4d7847fc-pxpg7" Sep 9 05:11:51.300051 kubelet[2666]: I0909 05:11:51.299939 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnpn7\" (UniqueName: \"kubernetes.io/projected/a25a10f8-0125-4c69-8369-96332003a4ce-kube-api-access-xnpn7\") pod \"cilium-operator-6c4d7847fc-pxpg7\" (UID: \"a25a10f8-0125-4c69-8369-96332003a4ce\") " pod="kube-system/cilium-operator-6c4d7847fc-pxpg7" Sep 9 05:11:51.439090 containerd[1507]: time="2025-09-09T05:11:51.438984159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rcqtd,Uid:ac8ff5bd-4f07-410d-b2e6-73bbd2ae097c,Namespace:kube-system,Attempt:0,}" Sep 9 05:11:51.446706 containerd[1507]: time="2025-09-09T05:11:51.446669034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cr4h4,Uid:938d4635-5470-4017-8c4f-e2705575ba8a,Namespace:kube-system,Attempt:0,}" Sep 9 05:11:51.458767 containerd[1507]: time="2025-09-09T05:11:51.458692006Z" level=info msg="connecting to shim 94c87b4e2abcd349acb75714035ca63df26a555ff445e1d5b6618fbc2b11f1df" address="unix:///run/containerd/s/b8b1066064a6204085a265d922bd523a2b01100499dde39a9b78e0549a9e833a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:11:51.465333 containerd[1507]: time="2025-09-09T05:11:51.465299367Z" level=info msg="connecting to shim a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968" address="unix:///run/containerd/s/abde1140e99b581e7efc08de5507f10ab23dbce4f6ad76d719274773da71df66" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:11:51.481855 systemd[1]: Started cri-containerd-94c87b4e2abcd349acb75714035ca63df26a555ff445e1d5b6618fbc2b11f1df.scope - libcontainer container 94c87b4e2abcd349acb75714035ca63df26a555ff445e1d5b6618fbc2b11f1df. Sep 9 05:11:51.484596 systemd[1]: Started cri-containerd-a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968.scope - libcontainer container a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968. 
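For every pod in this log, containerd is driven through the same three CRI steps: RunPodSandbox, then CreateContainer inside the returned sandbox, then StartContainer. The sketch below captures only that call order; the string-based interface and the fake runtime are simplifications for illustration, not the real protobuf CRI API.

```go
package main

import "fmt"

// runtime is a drastically simplified stand-in for a CRI runtime client,
// reduced to the three calls visible in the log.
type runtime interface {
	RunPodSandbox(name string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// bringUpPod mirrors the ordering seen above for the static pods,
// kube-proxy and the cilium pods.
func bringUpPod(r runtime, pod, container string) error {
	sb, err := r.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox %s: %w", pod, err)
	}
	cid, err := r.CreateContainer(sb, container)
	if err != nil {
		return fmt.Errorf("CreateContainer %s: %w", container, err)
	}
	return r.StartContainer(cid)
}

// fakeRuntime is a stub so the sketch runs end to end.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(name string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}
func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	return sb + "/" + name, nil
}
func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("started", id)
	return nil
}

func main() {
	r := &fakeRuntime{}
	_ = bringUpPod(r, "kube-proxy-rcqtd", "kube-proxy")
}
```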
Sep 9 05:11:51.510752 containerd[1507]: time="2025-09-09T05:11:51.509558135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rcqtd,Uid:ac8ff5bd-4f07-410d-b2e6-73bbd2ae097c,Namespace:kube-system,Attempt:0,} returns sandbox id \"94c87b4e2abcd349acb75714035ca63df26a555ff445e1d5b6618fbc2b11f1df\"" Sep 9 05:11:51.512558 containerd[1507]: time="2025-09-09T05:11:51.512521050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cr4h4,Uid:938d4635-5470-4017-8c4f-e2705575ba8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\"" Sep 9 05:11:51.514747 containerd[1507]: time="2025-09-09T05:11:51.514662360Z" level=info msg="CreateContainer within sandbox \"94c87b4e2abcd349acb75714035ca63df26a555ff445e1d5b6618fbc2b11f1df\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 05:11:51.515007 containerd[1507]: time="2025-09-09T05:11:51.514718277Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 05:11:51.515154 containerd[1507]: time="2025-09-09T05:11:51.515127260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pxpg7,Uid:a25a10f8-0125-4c69-8369-96332003a4ce,Namespace:kube-system,Attempt:0,}" Sep 9 05:11:51.526879 containerd[1507]: time="2025-09-09T05:11:51.526834285Z" level=info msg="Container 7a15d2e5783ee9cb987708e41ba086ddd3739f0b2ce97024b9203c20fdaa7131: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:11:51.533208 containerd[1507]: time="2025-09-09T05:11:51.533171577Z" level=info msg="CreateContainer within sandbox \"94c87b4e2abcd349acb75714035ca63df26a555ff445e1d5b6618fbc2b11f1df\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7a15d2e5783ee9cb987708e41ba086ddd3739f0b2ce97024b9203c20fdaa7131\"" Sep 9 05:11:51.533921 containerd[1507]: time="2025-09-09T05:11:51.533871108Z" level=info msg="StartContainer for \"7a15d2e5783ee9cb987708e41ba086ddd3739f0b2ce97024b9203c20fdaa7131\"" Sep 9 05:11:51.536950 containerd[1507]: time="2025-09-09T05:11:51.536780785Z" level=info msg="connecting to shim 7a15d2e5783ee9cb987708e41ba086ddd3739f0b2ce97024b9203c20fdaa7131" address="unix:///run/containerd/s/b8b1066064a6204085a265d922bd523a2b01100499dde39a9b78e0549a9e833a" protocol=ttrpc version=3 Sep 9 05:11:51.536950 containerd[1507]: time="2025-09-09T05:11:51.536807183Z" level=info msg="connecting to shim 489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f" address="unix:///run/containerd/s/479c3ecc4e5c7a56229bf84e00ea1d1a74b56e5a1fd2e19062645603d9c7832c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:11:51.555869 systemd[1]: Started cri-containerd-7a15d2e5783ee9cb987708e41ba086ddd3739f0b2ce97024b9203c20fdaa7131.scope - libcontainer container 7a15d2e5783ee9cb987708e41ba086ddd3739f0b2ce97024b9203c20fdaa7131. Sep 9 05:11:51.558995 systemd[1]: Started cri-containerd-489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f.scope - libcontainer container 489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f. 
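The cilium pull above uses a reference pinned by both tag and digest (quay.io/cilium/cilium:v1.12.5@sha256:…). A naive sketch of splitting such a reference into repository, tag and digest follows; it is good enough for the references in this log but is not the real distribution/reference parser (it ignores registries with ports, digest-only references, and so on).

```go
package main

import (
	"fmt"
	"strings"
)

// splitImageRef breaks a "repo:tag@sha256:..." reference into its parts.
// Naive by design; see the caveats in the note above.
func splitImageRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	if i := strings.LastIndex(ref, ":"); i >= 0 {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitImageRef(
		"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println("repo:  ", repo)
	fmt.Println("tag:   ", tag)
	fmt.Println("digest:", digest)
}
```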
Sep 9 05:11:51.594909 containerd[1507]: time="2025-09-09T05:11:51.594874369Z" level=info msg="StartContainer for \"7a15d2e5783ee9cb987708e41ba086ddd3739f0b2ce97024b9203c20fdaa7131\" returns successfully" Sep 9 05:11:51.595626 containerd[1507]: time="2025-09-09T05:11:51.595602858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pxpg7,Uid:a25a10f8-0125-4c69-8369-96332003a4ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f\"" Sep 9 05:11:52.208189 kubelet[2666]: I0909 05:11:52.208134 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rcqtd" podStartSLOduration=1.208121122 podStartE2EDuration="1.208121122s" podCreationTimestamp="2025-09-09 05:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:11:52.207520986 +0000 UTC m=+7.126330706" watchObservedRunningTime="2025-09-09 05:11:52.208121122 +0000 UTC m=+7.126930802" Sep 9 05:12:01.343595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067786335.mount: Deactivated successfully. Sep 9 05:12:01.561606 update_engine[1490]: I20250909 05:12:01.561074 1490 update_attempter.cc:509] Updating boot flags... Sep 9 05:12:02.789603 containerd[1507]: time="2025-09-09T05:12:02.789546731Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:02.790514 containerd[1507]: time="2025-09-09T05:12:02.790275274Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 9 05:12:02.791483 containerd[1507]: time="2025-09-09T05:12:02.791452846Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:02.793483 containerd[1507]: time="2025-09-09T05:12:02.793104887Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.2781389s" Sep 9 05:12:02.793483 containerd[1507]: time="2025-09-09T05:12:02.793156406Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 05:12:02.799099 containerd[1507]: time="2025-09-09T05:12:02.799057868Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 05:12:02.808317 containerd[1507]: time="2025-09-09T05:12:02.808266091Z" level=info msg="CreateContainer within sandbox \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:12:02.818512 containerd[1507]: time="2025-09-09T05:12:02.816398221Z" level=info msg="Container 95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567: CDI devices from 
CRI Config.CDIDevices: []" Sep 9 05:12:02.820165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount867644602.mount: Deactivated successfully. Sep 9 05:12:02.823464 containerd[1507]: time="2025-09-09T05:12:02.823401816Z" level=info msg="CreateContainer within sandbox \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\"" Sep 9 05:12:02.824024 containerd[1507]: time="2025-09-09T05:12:02.824001122Z" level=info msg="StartContainer for \"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\"" Sep 9 05:12:02.825258 containerd[1507]: time="2025-09-09T05:12:02.825223334Z" level=info msg="connecting to shim 95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567" address="unix:///run/containerd/s/abde1140e99b581e7efc08de5507f10ab23dbce4f6ad76d719274773da71df66" protocol=ttrpc version=3 Sep 9 05:12:02.877901 systemd[1]: Started cri-containerd-95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567.scope - libcontainer container 95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567. Sep 9 05:12:02.906995 containerd[1507]: time="2025-09-09T05:12:02.906920137Z" level=info msg="StartContainer for \"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\" returns successfully" Sep 9 05:12:02.920526 systemd[1]: cri-containerd-95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567.scope: Deactivated successfully. Sep 9 05:12:02.938984 containerd[1507]: time="2025-09-09T05:12:02.938912346Z" level=info msg="received exit event container_id:\"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\" id:\"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\" pid:3117 exited_at:{seconds:1757394722 nanos:932858048}" Sep 9 05:12:02.939321 containerd[1507]: time="2025-09-09T05:12:02.939234779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\" id:\"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\" pid:3117 exited_at:{seconds:1757394722 nanos:932858048}" Sep 9 05:12:02.964120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567-rootfs.mount: Deactivated successfully. 
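The exit events above record exited_at as separate seconds and nanos fields. Converting the pair back to a wall-clock time is a one-liner with the standard library; the values below are copied from the mount-cgroup exit event.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// seconds:1757394722 nanos:932858048, copied from the exit event above.
	exitedAt := time.Unix(1757394722, 932858048).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-09-09T05:12:02.932858048Z
}
```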
Sep 9 05:12:03.237539 containerd[1507]: time="2025-09-09T05:12:03.237411131Z" level=info msg="CreateContainer within sandbox \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:12:03.246465 containerd[1507]: time="2025-09-09T05:12:03.246407010Z" level=info msg="Container ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:03.259189 containerd[1507]: time="2025-09-09T05:12:03.259130926Z" level=info msg="CreateContainer within sandbox \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\"" Sep 9 05:12:03.260859 containerd[1507]: time="2025-09-09T05:12:03.260823848Z" level=info msg="StartContainer for \"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\"" Sep 9 05:12:03.261904 containerd[1507]: time="2025-09-09T05:12:03.261870305Z" level=info msg="connecting to shim ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc" address="unix:///run/containerd/s/abde1140e99b581e7efc08de5507f10ab23dbce4f6ad76d719274773da71df66" protocol=ttrpc version=3 Sep 9 05:12:03.282949 systemd[1]: Started cri-containerd-ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc.scope - libcontainer container ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc. Sep 9 05:12:03.334250 containerd[1507]: time="2025-09-09T05:12:03.334203610Z" level=info msg="StartContainer for \"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\" returns successfully" Sep 9 05:12:03.346777 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 05:12:03.346993 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:12:03.347474 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:12:03.348939 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:12:03.351452 systemd[1]: cri-containerd-ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc.scope: Deactivated successfully. Sep 9 05:12:03.352819 systemd[1]: cri-containerd-ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc.scope: Consumed 31ms CPU time, 7.1M memory peak, 6.3M read from disk, 4K written to disk. Sep 9 05:12:03.364564 containerd[1507]: time="2025-09-09T05:12:03.364529853Z" level=info msg="received exit event container_id:\"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\" id:\"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\" pid:3163 exited_at:{seconds:1757394723 nanos:364279698}" Sep 9 05:12:03.364671 containerd[1507]: time="2025-09-09T05:12:03.364611931Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\" id:\"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\" pid:3163 exited_at:{seconds:1757394723 nanos:364279698}" Sep 9 05:12:03.375777 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:12:03.970924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681683201.mount: Deactivated successfully. 
Sep 9 05:12:04.247726 containerd[1507]: time="2025-09-09T05:12:04.247562762Z" level=info msg="CreateContainer within sandbox \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:12:04.260717 containerd[1507]: time="2025-09-09T05:12:04.259109476Z" level=info msg="Container 69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:04.275995 containerd[1507]: time="2025-09-09T05:12:04.275948838Z" level=info msg="CreateContainer within sandbox \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\"" Sep 9 05:12:04.277126 containerd[1507]: time="2025-09-09T05:12:04.277087414Z" level=info msg="StartContainer for \"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\"" Sep 9 05:12:04.281133 containerd[1507]: time="2025-09-09T05:12:04.281029090Z" level=info msg="connecting to shim 69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193" address="unix:///run/containerd/s/abde1140e99b581e7efc08de5507f10ab23dbce4f6ad76d719274773da71df66" protocol=ttrpc version=3 Sep 9 05:12:04.305923 systemd[1]: Started cri-containerd-69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193.scope - libcontainer container 69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193. Sep 9 05:12:04.311718 containerd[1507]: time="2025-09-09T05:12:04.311516042Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:04.311989 containerd[1507]: time="2025-09-09T05:12:04.311952633Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 9 05:12:04.312802 containerd[1507]: time="2025-09-09T05:12:04.312767175Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:04.314096 containerd[1507]: time="2025-09-09T05:12:04.314062988Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.514949362s" Sep 9 05:12:04.314207 containerd[1507]: time="2025-09-09T05:12:04.314189745Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 05:12:04.319484 containerd[1507]: time="2025-09-09T05:12:04.319443633Z" level=info msg="CreateContainer within sandbox \"489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 05:12:04.336885 containerd[1507]: time="2025-09-09T05:12:04.336835024Z" level=info msg="Container 06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9: CDI 
devices from CRI Config.CDIDevices: []" Sep 9 05:12:04.344260 containerd[1507]: time="2025-09-09T05:12:04.344207747Z" level=info msg="CreateContainer within sandbox \"489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\"" Sep 9 05:12:04.345651 containerd[1507]: time="2025-09-09T05:12:04.345622557Z" level=info msg="StartContainer for \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\"" Sep 9 05:12:04.346842 systemd[1]: cri-containerd-69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193.scope: Deactivated successfully. Sep 9 05:12:04.347080 containerd[1507]: time="2025-09-09T05:12:04.347034087Z" level=info msg="StartContainer for \"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\" returns successfully" Sep 9 05:12:04.348226 containerd[1507]: time="2025-09-09T05:12:04.348182223Z" level=info msg="connecting to shim 06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9" address="unix:///run/containerd/s/479c3ecc4e5c7a56229bf84e00ea1d1a74b56e5a1fd2e19062645603d9c7832c" protocol=ttrpc version=3 Sep 9 05:12:04.351679 containerd[1507]: time="2025-09-09T05:12:04.351644029Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\" id:\"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\" pid:3226 exited_at:{seconds:1757394724 nanos:351225158}" Sep 9 05:12:04.351925 containerd[1507]: time="2025-09-09T05:12:04.351735747Z" level=info msg="received exit event container_id:\"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\" id:\"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\" pid:3226 exited_at:{seconds:1757394724 nanos:351225158}" Sep 9 05:12:04.379899 systemd[1]: Started cri-containerd-06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9.scope - libcontainer container 06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9. 
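The two pull records above report both bytes read and elapsed time (157646710 bytes in 11.2781389s for the cilium image, 17135306 bytes in 1.514949362s for the operator image). A small sketch of the implied throughput; since the "size" in the Pulled lines differs slightly from "bytes read", treat the result as a rough figure.

```go
package main

import (
	"fmt"
	"time"
)

// throughput returns MiB/s for a pull of n bytes taking d.
func throughput(n int64, d time.Duration) float64 {
	return float64(n) / (1 << 20) / d.Seconds()
}

func main() {
	// Figures copied from the pull records above (bytes read / reported duration).
	cilium, _ := time.ParseDuration("11.2781389s")
	operator, _ := time.ParseDuration("1.514949362s")

	fmt.Printf("cilium image:   %.1f MiB/s\n", throughput(157646710, cilium))
	fmt.Printf("operator image: %.1f MiB/s\n", throughput(17135306, operator))
}
```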
Sep 9 05:12:04.427697 containerd[1507]: time="2025-09-09T05:12:04.427585735Z" level=info msg="StartContainer for \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\" returns successfully" Sep 9 05:12:05.264258 containerd[1507]: time="2025-09-09T05:12:05.264195652Z" level=info msg="CreateContainer within sandbox \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 05:12:05.266394 kubelet[2666]: I0909 05:12:05.266323 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pxpg7" podStartSLOduration=1.548084089 podStartE2EDuration="14.26628433s" podCreationTimestamp="2025-09-09 05:11:51 +0000 UTC" firstStartedPulling="2025-09-09 05:11:51.596794647 +0000 UTC m=+6.515604287" lastFinishedPulling="2025-09-09 05:12:04.314994848 +0000 UTC m=+19.233804528" observedRunningTime="2025-09-09 05:12:05.266051734 +0000 UTC m=+20.184861414" watchObservedRunningTime="2025-09-09 05:12:05.26628433 +0000 UTC m=+20.185094010" Sep 9 05:12:05.282616 containerd[1507]: time="2025-09-09T05:12:05.282563960Z" level=info msg="Container da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:05.298669 containerd[1507]: time="2025-09-09T05:12:05.298622475Z" level=info msg="CreateContainer within sandbox \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\"" Sep 9 05:12:05.299510 containerd[1507]: time="2025-09-09T05:12:05.299357940Z" level=info msg="StartContainer for \"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\"" Sep 9 05:12:05.300743 containerd[1507]: time="2025-09-09T05:12:05.300246122Z" level=info msg="connecting to shim da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499" address="unix:///run/containerd/s/abde1140e99b581e7efc08de5507f10ab23dbce4f6ad76d719274773da71df66" protocol=ttrpc version=3 Sep 9 05:12:05.332958 systemd[1]: Started cri-containerd-da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499.scope - libcontainer container da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499. Sep 9 05:12:05.359309 systemd[1]: cri-containerd-da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499.scope: Deactivated successfully. 
Sep 9 05:12:05.360319 containerd[1507]: time="2025-09-09T05:12:05.360278585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\" id:\"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\" pid:3297 exited_at:{seconds:1757394725 nanos:359594199}" Sep 9 05:12:05.361964 containerd[1507]: time="2025-09-09T05:12:05.361924912Z" level=info msg="received exit event container_id:\"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\" id:\"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\" pid:3297 exited_at:{seconds:1757394725 nanos:359594199}" Sep 9 05:12:05.368938 containerd[1507]: time="2025-09-09T05:12:05.368885171Z" level=info msg="StartContainer for \"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\" returns successfully" Sep 9 05:12:05.376680 containerd[1507]: time="2025-09-09T05:12:05.376403259Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod938d4635_5470_4017_8c4f_e2705575ba8a.slice/cri-containerd-da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499.scope/memory.events\": no such file or directory" Sep 9 05:12:05.380878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499-rootfs.mount: Deactivated successfully. Sep 9 05:12:06.263343 containerd[1507]: time="2025-09-09T05:12:06.263281897Z" level=info msg="CreateContainer within sandbox \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 05:12:06.285056 containerd[1507]: time="2025-09-09T05:12:06.284296171Z" level=info msg="Container 3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:06.290314 containerd[1507]: time="2025-09-09T05:12:06.290279815Z" level=info msg="CreateContainer within sandbox \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\"" Sep 9 05:12:06.290935 containerd[1507]: time="2025-09-09T05:12:06.290908243Z" level=info msg="StartContainer for \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\"" Sep 9 05:12:06.291827 containerd[1507]: time="2025-09-09T05:12:06.291790946Z" level=info msg="connecting to shim 3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64" address="unix:///run/containerd/s/abde1140e99b581e7efc08de5507f10ab23dbce4f6ad76d719274773da71df66" protocol=ttrpc version=3 Sep 9 05:12:06.310841 systemd[1]: Started cri-containerd-3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64.scope - libcontainer container 3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64. 
Sep 9 05:12:06.346173 containerd[1507]: time="2025-09-09T05:12:06.346140496Z" level=info msg="StartContainer for \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" returns successfully" Sep 9 05:12:06.404157 containerd[1507]: time="2025-09-09T05:12:06.404118536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" id:\"0b39bbe2117cedf0acd286e17d34d6f77049116bb4b600cbf421d32d359c2ebb\" pid:3368 exited_at:{seconds:1757394726 nanos:403049956}" Sep 9 05:12:06.470401 kubelet[2666]: I0909 05:12:06.470346 2666 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 05:12:06.514334 systemd[1]: Created slice kubepods-burstable-pod72d613ca_f1d7_4d5d_9f31_d021077b2aad.slice - libcontainer container kubepods-burstable-pod72d613ca_f1d7_4d5d_9f31_d021077b2aad.slice. Sep 9 05:12:06.521598 systemd[1]: Created slice kubepods-burstable-pod7a692af8_a1d5_4220_bbed_7b46f4f793c9.slice - libcontainer container kubepods-burstable-pod7a692af8_a1d5_4220_bbed_7b46f4f793c9.slice. Sep 9 05:12:06.608130 kubelet[2666]: I0909 05:12:06.608086 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a692af8-a1d5-4220-bbed-7b46f4f793c9-config-volume\") pod \"coredns-674b8bbfcf-d9cmp\" (UID: \"7a692af8-a1d5-4220-bbed-7b46f4f793c9\") " pod="kube-system/coredns-674b8bbfcf-d9cmp" Sep 9 05:12:06.608473 kubelet[2666]: I0909 05:12:06.608421 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72d613ca-f1d7-4d5d-9f31-d021077b2aad-config-volume\") pod \"coredns-674b8bbfcf-b6mgb\" (UID: \"72d613ca-f1d7-4d5d-9f31-d021077b2aad\") " pod="kube-system/coredns-674b8bbfcf-b6mgb" Sep 9 05:12:06.608776 kubelet[2666]: I0909 05:12:06.608627 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhqhv\" (UniqueName: \"kubernetes.io/projected/7a692af8-a1d5-4220-bbed-7b46f4f793c9-kube-api-access-lhqhv\") pod \"coredns-674b8bbfcf-d9cmp\" (UID: \"7a692af8-a1d5-4220-bbed-7b46f4f793c9\") " pod="kube-system/coredns-674b8bbfcf-d9cmp" Sep 9 05:12:06.608776 kubelet[2666]: I0909 05:12:06.608673 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fn5t\" (UniqueName: \"kubernetes.io/projected/72d613ca-f1d7-4d5d-9f31-d021077b2aad-kube-api-access-9fn5t\") pod \"coredns-674b8bbfcf-b6mgb\" (UID: \"72d613ca-f1d7-4d5d-9f31-d021077b2aad\") " pod="kube-system/coredns-674b8bbfcf-b6mgb" Sep 9 05:12:06.821146 containerd[1507]: time="2025-09-09T05:12:06.821049440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b6mgb,Uid:72d613ca-f1d7-4d5d-9f31-d021077b2aad,Namespace:kube-system,Attempt:0,}" Sep 9 05:12:06.825404 containerd[1507]: time="2025-09-09T05:12:06.825347837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d9cmp,Uid:7a692af8-a1d5-4220-bbed-7b46f4f793c9,Namespace:kube-system,Attempt:0,}" Sep 9 05:12:07.282279 kubelet[2666]: I0909 05:12:07.282103 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cr4h4" podStartSLOduration=4.997134218 podStartE2EDuration="16.282087179s" podCreationTimestamp="2025-09-09 05:11:51 +0000 UTC" firstStartedPulling="2025-09-09 05:11:51.51393767 +0000 UTC m=+6.432747350" 
lastFinishedPulling="2025-09-09 05:12:02.798890631 +0000 UTC m=+17.717700311" observedRunningTime="2025-09-09 05:12:07.279271391 +0000 UTC m=+22.198081071" watchObservedRunningTime="2025-09-09 05:12:07.282087179 +0000 UTC m=+22.200896819" Sep 9 05:12:08.410484 systemd-networkd[1454]: cilium_host: Link UP Sep 9 05:12:08.410716 systemd-networkd[1454]: cilium_net: Link UP Sep 9 05:12:08.410859 systemd-networkd[1454]: cilium_host: Gained carrier Sep 9 05:12:08.410973 systemd-networkd[1454]: cilium_net: Gained carrier Sep 9 05:12:08.498302 systemd-networkd[1454]: cilium_vxlan: Link UP Sep 9 05:12:08.498313 systemd-networkd[1454]: cilium_vxlan: Gained carrier Sep 9 05:12:08.751738 kernel: NET: Registered PF_ALG protocol family Sep 9 05:12:08.968849 systemd-networkd[1454]: cilium_net: Gained IPv6LL Sep 9 05:12:09.225904 systemd-networkd[1454]: cilium_host: Gained IPv6LL Sep 9 05:12:09.329826 systemd-networkd[1454]: lxc_health: Link UP Sep 9 05:12:09.330114 systemd-networkd[1454]: lxc_health: Gained carrier Sep 9 05:12:09.862858 systemd-networkd[1454]: lxc8bc3b4901b8b: Link UP Sep 9 05:12:09.865140 kernel: eth0: renamed from tmpb3f19 Sep 9 05:12:09.865757 systemd-networkd[1454]: lxc8bc3b4901b8b: Gained carrier Sep 9 05:12:09.866551 systemd-networkd[1454]: lxc03e0c5276f30: Link UP Sep 9 05:12:09.879799 kernel: eth0: renamed from tmp75df1 Sep 9 05:12:09.886103 systemd-networkd[1454]: lxc03e0c5276f30: Gained carrier Sep 9 05:12:10.248898 systemd-networkd[1454]: cilium_vxlan: Gained IPv6LL Sep 9 05:12:11.144926 systemd-networkd[1454]: lxc8bc3b4901b8b: Gained IPv6LL Sep 9 05:12:11.337855 systemd-networkd[1454]: lxc_health: Gained IPv6LL Sep 9 05:12:11.592884 systemd-networkd[1454]: lxc03e0c5276f30: Gained IPv6LL Sep 9 05:12:13.373478 containerd[1507]: time="2025-09-09T05:12:13.373441502Z" level=info msg="connecting to shim 75df14f19194432e9e7295613d9353ca0f8bc14c9a59d798c88b28e7f34febd9" address="unix:///run/containerd/s/9f78ca86b51e40ece81a062b8a3485e197f63eb0c1c4470139cd4bd1611606d4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:12:13.386325 containerd[1507]: time="2025-09-09T05:12:13.386192161Z" level=info msg="connecting to shim b3f19d88bded53258956776f90714f88dbd62334aed96956df80796db5fbef0c" address="unix:///run/containerd/s/df7ee90f686f2e140a5c5b253250033a665cd798f0cc81ee5c1192705cac1226" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:12:13.418862 systemd[1]: Started cri-containerd-75df14f19194432e9e7295613d9353ca0f8bc14c9a59d798c88b28e7f34febd9.scope - libcontainer container 75df14f19194432e9e7295613d9353ca0f8bc14c9a59d798c88b28e7f34febd9. Sep 9 05:12:13.421815 systemd[1]: Started cri-containerd-b3f19d88bded53258956776f90714f88dbd62334aed96956df80796db5fbef0c.scope - libcontainer container b3f19d88bded53258956776f90714f88dbd62334aed96956df80796db5fbef0c. 
Sep 9 05:12:13.431342 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:12:13.432450 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:12:13.458202 containerd[1507]: time="2025-09-09T05:12:13.458154738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b6mgb,Uid:72d613ca-f1d7-4d5d-9f31-d021077b2aad,Namespace:kube-system,Attempt:0,} returns sandbox id \"75df14f19194432e9e7295613d9353ca0f8bc14c9a59d798c88b28e7f34febd9\"" Sep 9 05:12:13.466322 containerd[1507]: time="2025-09-09T05:12:13.466294343Z" level=info msg="CreateContainer within sandbox \"75df14f19194432e9e7295613d9353ca0f8bc14c9a59d798c88b28e7f34febd9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:12:13.476425 containerd[1507]: time="2025-09-09T05:12:13.475874047Z" level=info msg="Container 0b7b6e2bfff825c72dcc931a2e273f5316ffbc921e40c72d9f8ce50d5473db49: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:13.480758 containerd[1507]: time="2025-09-09T05:12:13.480733338Z" level=info msg="CreateContainer within sandbox \"75df14f19194432e9e7295613d9353ca0f8bc14c9a59d798c88b28e7f34febd9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b7b6e2bfff825c72dcc931a2e273f5316ffbc921e40c72d9f8ce50d5473db49\"" Sep 9 05:12:13.481445 containerd[1507]: time="2025-09-09T05:12:13.481425968Z" level=info msg="StartContainer for \"0b7b6e2bfff825c72dcc931a2e273f5316ffbc921e40c72d9f8ce50d5473db49\"" Sep 9 05:12:13.482520 containerd[1507]: time="2025-09-09T05:12:13.482488273Z" level=info msg="connecting to shim 0b7b6e2bfff825c72dcc931a2e273f5316ffbc921e40c72d9f8ce50d5473db49" address="unix:///run/containerd/s/9f78ca86b51e40ece81a062b8a3485e197f63eb0c1c4470139cd4bd1611606d4" protocol=ttrpc version=3 Sep 9 05:12:13.487796 containerd[1507]: time="2025-09-09T05:12:13.487764318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d9cmp,Uid:7a692af8-a1d5-4220-bbed-7b46f4f793c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3f19d88bded53258956776f90714f88dbd62334aed96956df80796db5fbef0c\"" Sep 9 05:12:13.492028 containerd[1507]: time="2025-09-09T05:12:13.492000377Z" level=info msg="CreateContainer within sandbox \"b3f19d88bded53258956776f90714f88dbd62334aed96956df80796db5fbef0c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:12:13.502280 containerd[1507]: time="2025-09-09T05:12:13.502249072Z" level=info msg="Container 62d31c7a4c7d07848eb8183fee1ec598ff5f2e6540bcbc2588a0870507869c9b: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:13.506587 containerd[1507]: time="2025-09-09T05:12:13.506554411Z" level=info msg="CreateContainer within sandbox \"b3f19d88bded53258956776f90714f88dbd62334aed96956df80796db5fbef0c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62d31c7a4c7d07848eb8183fee1ec598ff5f2e6540bcbc2588a0870507869c9b\"" Sep 9 05:12:13.507084 containerd[1507]: time="2025-09-09T05:12:13.507061523Z" level=info msg="StartContainer for \"62d31c7a4c7d07848eb8183fee1ec598ff5f2e6540bcbc2588a0870507869c9b\"" Sep 9 05:12:13.508857 containerd[1507]: time="2025-09-09T05:12:13.508821578Z" level=info msg="connecting to shim 62d31c7a4c7d07848eb8183fee1ec598ff5f2e6540bcbc2588a0870507869c9b" address="unix:///run/containerd/s/df7ee90f686f2e140a5c5b253250033a665cd798f0cc81ee5c1192705cac1226" protocol=ttrpc version=3 Sep 9 05:12:13.509935 systemd[1]: Started 
cri-containerd-0b7b6e2bfff825c72dcc931a2e273f5316ffbc921e40c72d9f8ce50d5473db49.scope - libcontainer container 0b7b6e2bfff825c72dcc931a2e273f5316ffbc921e40c72d9f8ce50d5473db49. Sep 9 05:12:13.541879 systemd[1]: Started cri-containerd-62d31c7a4c7d07848eb8183fee1ec598ff5f2e6540bcbc2588a0870507869c9b.scope - libcontainer container 62d31c7a4c7d07848eb8183fee1ec598ff5f2e6540bcbc2588a0870507869c9b. Sep 9 05:12:13.566036 containerd[1507]: time="2025-09-09T05:12:13.565896127Z" level=info msg="StartContainer for \"0b7b6e2bfff825c72dcc931a2e273f5316ffbc921e40c72d9f8ce50d5473db49\" returns successfully" Sep 9 05:12:13.573556 containerd[1507]: time="2025-09-09T05:12:13.573512219Z" level=info msg="StartContainer for \"62d31c7a4c7d07848eb8183fee1ec598ff5f2e6540bcbc2588a0870507869c9b\" returns successfully" Sep 9 05:12:13.810744 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:42488.service - OpenSSH per-connection server daemon (10.0.0.1:42488). Sep 9 05:12:13.863344 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 42488 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:13.864454 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:13.868569 systemd-logind[1484]: New session 8 of user core. Sep 9 05:12:13.884884 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 05:12:14.009111 sshd[4019]: Connection closed by 10.0.0.1 port 42488 Sep 9 05:12:14.009615 sshd-session[4016]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:14.012752 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:42488.service: Deactivated successfully. Sep 9 05:12:14.016114 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 05:12:14.016837 systemd-logind[1484]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:12:14.017922 systemd-logind[1484]: Removed session 8. Sep 9 05:12:14.322867 kubelet[2666]: I0909 05:12:14.321676 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d9cmp" podStartSLOduration=23.321660726 podStartE2EDuration="23.321660726s" podCreationTimestamp="2025-09-09 05:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:12:14.320171546 +0000 UTC m=+29.238981226" watchObservedRunningTime="2025-09-09 05:12:14.321660726 +0000 UTC m=+29.240470406" Sep 9 05:12:14.345189 kubelet[2666]: I0909 05:12:14.344788 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b6mgb" podStartSLOduration=23.34477173 podStartE2EDuration="23.34477173s" podCreationTimestamp="2025-09-09 05:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:12:14.344727491 +0000 UTC m=+29.263537171" watchObservedRunningTime="2025-09-09 05:12:14.34477173 +0000 UTC m=+29.263581410" Sep 9 05:12:19.027576 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:42496.service - OpenSSH per-connection server daemon (10.0.0.1:42496). Sep 9 05:12:19.115840 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 42496 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:19.117526 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:19.121902 systemd-logind[1484]: New session 9 of user core. 
Sep 9 05:12:19.134907 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 05:12:19.255664 sshd[4044]: Connection closed by 10.0.0.1 port 42496 Sep 9 05:12:19.256532 sshd-session[4041]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:19.259924 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:42496.service: Deactivated successfully. Sep 9 05:12:19.261585 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 05:12:19.262358 systemd-logind[1484]: Session 9 logged out. Waiting for processes to exit. Sep 9 05:12:19.263805 systemd-logind[1484]: Removed session 9. Sep 9 05:12:24.271852 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:59810.service - OpenSSH per-connection server daemon (10.0.0.1:59810). Sep 9 05:12:24.329901 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 59810 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:24.331132 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:24.335838 systemd-logind[1484]: New session 10 of user core. Sep 9 05:12:24.342864 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 05:12:24.451577 sshd[4063]: Connection closed by 10.0.0.1 port 59810 Sep 9 05:12:24.452860 sshd-session[4060]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:24.464788 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:59810.service: Deactivated successfully. Sep 9 05:12:24.466497 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 05:12:24.467181 systemd-logind[1484]: Session 10 logged out. Waiting for processes to exit. Sep 9 05:12:24.470938 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:59814.service - OpenSSH per-connection server daemon (10.0.0.1:59814). Sep 9 05:12:24.472024 systemd-logind[1484]: Removed session 10. Sep 9 05:12:24.525762 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 59814 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:24.527010 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:24.530786 systemd-logind[1484]: New session 11 of user core. Sep 9 05:12:24.540852 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 05:12:24.686422 sshd[4081]: Connection closed by 10.0.0.1 port 59814 Sep 9 05:12:24.685126 sshd-session[4078]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:24.700315 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:59814.service: Deactivated successfully. Sep 9 05:12:24.704563 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 05:12:24.706027 systemd-logind[1484]: Session 11 logged out. Waiting for processes to exit. Sep 9 05:12:24.711270 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:59830.service - OpenSSH per-connection server daemon (10.0.0.1:59830). Sep 9 05:12:24.712059 systemd-logind[1484]: Removed session 11. Sep 9 05:12:24.763419 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 59830 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:24.764642 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:24.768296 systemd-logind[1484]: New session 12 of user core. Sep 9 05:12:24.778836 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 9 05:12:24.888781 sshd[4096]: Connection closed by 10.0.0.1 port 59830 Sep 9 05:12:24.889271 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:24.893119 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:59830.service: Deactivated successfully. Sep 9 05:12:24.895622 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 05:12:24.897065 systemd-logind[1484]: Session 12 logged out. Waiting for processes to exit. Sep 9 05:12:24.900403 systemd-logind[1484]: Removed session 12. Sep 9 05:12:29.902894 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:59838.service - OpenSSH per-connection server daemon (10.0.0.1:59838). Sep 9 05:12:29.948695 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 59838 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:29.949930 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:29.954163 systemd-logind[1484]: New session 13 of user core. Sep 9 05:12:29.967957 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 05:12:30.079818 sshd[4114]: Connection closed by 10.0.0.1 port 59838 Sep 9 05:12:30.080229 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:30.083503 systemd-logind[1484]: Session 13 logged out. Waiting for processes to exit. Sep 9 05:12:30.083647 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:59838.service: Deactivated successfully. Sep 9 05:12:30.085241 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 05:12:30.086546 systemd-logind[1484]: Removed session 13. Sep 9 05:12:35.095840 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:40766.service - OpenSSH per-connection server daemon (10.0.0.1:40766). Sep 9 05:12:35.165539 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 40766 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:35.166894 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:35.172139 systemd-logind[1484]: New session 14 of user core. Sep 9 05:12:35.195141 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 05:12:35.327208 sshd[4130]: Connection closed by 10.0.0.1 port 40766 Sep 9 05:12:35.327932 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:35.338407 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:40766.service: Deactivated successfully. Sep 9 05:12:35.343102 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 05:12:35.345218 systemd-logind[1484]: Session 14 logged out. Waiting for processes to exit. Sep 9 05:12:35.348099 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:40772.service - OpenSSH per-connection server daemon (10.0.0.1:40772). Sep 9 05:12:35.349186 systemd-logind[1484]: Removed session 14. Sep 9 05:12:35.402927 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 40772 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:35.404532 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:35.408970 systemd-logind[1484]: New session 15 of user core. Sep 9 05:12:35.420941 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 05:12:35.606909 sshd[4146]: Connection closed by 10.0.0.1 port 40772 Sep 9 05:12:35.607372 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:35.616094 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:40772.service: Deactivated successfully. 
Sep 9 05:12:35.618008 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 05:12:35.619201 systemd-logind[1484]: Session 15 logged out. Waiting for processes to exit. Sep 9 05:12:35.621892 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:40782.service - OpenSSH per-connection server daemon (10.0.0.1:40782). Sep 9 05:12:35.623340 systemd-logind[1484]: Removed session 15. Sep 9 05:12:35.680374 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 40782 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:35.681935 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:35.687410 systemd-logind[1484]: New session 16 of user core. Sep 9 05:12:35.701923 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 05:12:36.314551 sshd[4162]: Connection closed by 10.0.0.1 port 40782 Sep 9 05:12:36.314881 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:36.324422 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:40782.service: Deactivated successfully. Sep 9 05:12:36.327054 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 05:12:36.328187 systemd-logind[1484]: Session 16 logged out. Waiting for processes to exit. Sep 9 05:12:36.334323 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:40794.service - OpenSSH per-connection server daemon (10.0.0.1:40794). Sep 9 05:12:36.335773 systemd-logind[1484]: Removed session 16. Sep 9 05:12:36.396552 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 40794 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:36.397936 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:36.402201 systemd-logind[1484]: New session 17 of user core. Sep 9 05:12:36.413929 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 05:12:36.643316 sshd[4184]: Connection closed by 10.0.0.1 port 40794 Sep 9 05:12:36.643923 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:36.654903 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:40794.service: Deactivated successfully. Sep 9 05:12:36.657056 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 05:12:36.659296 systemd-logind[1484]: Session 17 logged out. Waiting for processes to exit. Sep 9 05:12:36.662266 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:40800.service - OpenSSH per-connection server daemon (10.0.0.1:40800). Sep 9 05:12:36.663389 systemd-logind[1484]: Removed session 17. Sep 9 05:12:36.734955 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 40800 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:36.736267 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:36.740645 systemd-logind[1484]: New session 18 of user core. Sep 9 05:12:36.759938 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 05:12:36.876760 sshd[4198]: Connection closed by 10.0.0.1 port 40800 Sep 9 05:12:36.876202 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:36.880466 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:40800.service: Deactivated successfully. Sep 9 05:12:36.882568 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 05:12:36.883739 systemd-logind[1484]: Session 18 logged out. Waiting for processes to exit. Sep 9 05:12:36.885340 systemd-logind[1484]: Removed session 18. 
Sep 9 05:12:41.891163 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:47584.service - OpenSSH per-connection server daemon (10.0.0.1:47584). Sep 9 05:12:41.941828 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 47584 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:41.942663 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:41.946472 systemd-logind[1484]: New session 19 of user core. Sep 9 05:12:41.957904 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 05:12:42.070336 sshd[4216]: Connection closed by 10.0.0.1 port 47584 Sep 9 05:12:42.070176 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:42.073964 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:47584.service: Deactivated successfully. Sep 9 05:12:42.075697 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 05:12:42.078467 systemd-logind[1484]: Session 19 logged out. Waiting for processes to exit. Sep 9 05:12:42.079619 systemd-logind[1484]: Removed session 19. Sep 9 05:12:47.081876 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:47622.service - OpenSSH per-connection server daemon (10.0.0.1:47622). Sep 9 05:12:47.136782 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 47622 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:47.138569 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:47.142795 systemd-logind[1484]: New session 20 of user core. Sep 9 05:12:47.152139 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 05:12:47.278696 sshd[4236]: Connection closed by 10.0.0.1 port 47622 Sep 9 05:12:47.277541 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:47.283853 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:47622.service: Deactivated successfully. Sep 9 05:12:47.286340 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 05:12:47.290407 systemd-logind[1484]: Session 20 logged out. Waiting for processes to exit. Sep 9 05:12:47.291832 systemd-logind[1484]: Removed session 20. Sep 9 05:12:52.307698 systemd[1]: Started sshd@20-10.0.0.133:22-10.0.0.1:34018.service - OpenSSH per-connection server daemon (10.0.0.1:34018). Sep 9 05:12:52.350453 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 34018 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:52.352579 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:52.357516 systemd-logind[1484]: New session 21 of user core. Sep 9 05:12:52.366871 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 05:12:52.498752 sshd[4256]: Connection closed by 10.0.0.1 port 34018 Sep 9 05:12:52.498595 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:52.511581 systemd[1]: sshd@20-10.0.0.133:22-10.0.0.1:34018.service: Deactivated successfully. Sep 9 05:12:52.513473 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 05:12:52.517912 systemd-logind[1484]: Session 21 logged out. Waiting for processes to exit. Sep 9 05:12:52.520326 systemd[1]: Started sshd@21-10.0.0.133:22-10.0.0.1:34022.service - OpenSSH per-connection server daemon (10.0.0.1:34022). Sep 9 05:12:52.521022 systemd-logind[1484]: Removed session 21. 
Sep 9 05:12:52.576940 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 34022 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:52.578030 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:52.582806 systemd-logind[1484]: New session 22 of user core. Sep 9 05:12:52.592933 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 05:12:54.501887 containerd[1507]: time="2025-09-09T05:12:54.501849309Z" level=info msg="StopContainer for \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\" with timeout 30 (s)" Sep 9 05:12:54.502639 containerd[1507]: time="2025-09-09T05:12:54.502514481Z" level=info msg="Stop container \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\" with signal terminated" Sep 9 05:12:54.521220 systemd[1]: cri-containerd-06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9.scope: Deactivated successfully. Sep 9 05:12:54.524807 containerd[1507]: time="2025-09-09T05:12:54.524655678Z" level=info msg="received exit event container_id:\"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\" id:\"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\" pid:3263 exited_at:{seconds:1757394774 nanos:524440954}" Sep 9 05:12:54.524908 containerd[1507]: time="2025-09-09T05:12:54.524816721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\" id:\"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\" pid:3263 exited_at:{seconds:1757394774 nanos:524440954}" Sep 9 05:12:54.537851 containerd[1507]: time="2025-09-09T05:12:54.537793033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" id:\"dd83314d1ff0fc12c44d964e35433a1f018d70bd0633fb73319135f6ecf115f1\" pid:4300 exited_at:{seconds:1757394774 nanos:536969538}" Sep 9 05:12:54.540610 containerd[1507]: time="2025-09-09T05:12:54.540349919Z" level=info msg="StopContainer for \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" with timeout 2 (s)" Sep 9 05:12:54.540892 containerd[1507]: time="2025-09-09T05:12:54.540852048Z" level=info msg="Stop container \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" with signal terminated" Sep 9 05:12:54.547570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9-rootfs.mount: Deactivated successfully. Sep 9 05:12:54.548295 systemd-networkd[1454]: lxc_health: Link DOWN Sep 9 05:12:54.548299 systemd-networkd[1454]: lxc_health: Lost carrier Sep 9 05:12:54.558956 containerd[1507]: time="2025-09-09T05:12:54.558923092Z" level=info msg="StopContainer for \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\" returns successfully" Sep 9 05:12:54.563615 containerd[1507]: time="2025-09-09T05:12:54.562823442Z" level=info msg="StopPodSandbox for \"489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f\"" Sep 9 05:12:54.564422 systemd[1]: cri-containerd-3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64.scope: Deactivated successfully. Sep 9 05:12:54.564852 systemd[1]: cri-containerd-3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64.scope: Consumed 6.166s CPU time, 123.3M memory peak, 136K read from disk, 12.9M written to disk. 
Sep 9 05:12:54.566726 containerd[1507]: time="2025-09-09T05:12:54.566653430Z" level=info msg="received exit event container_id:\"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" id:\"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" pid:3338 exited_at:{seconds:1757394774 nanos:566483387}" Sep 9 05:12:54.567697 containerd[1507]: time="2025-09-09T05:12:54.566892074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" id:\"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" pid:3338 exited_at:{seconds:1757394774 nanos:566483387}" Sep 9 05:12:54.580508 containerd[1507]: time="2025-09-09T05:12:54.569410480Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:12:54.581444 containerd[1507]: time="2025-09-09T05:12:54.581415055Z" level=info msg="Container to stop \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:12:54.587963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64-rootfs.mount: Deactivated successfully. Sep 9 05:12:54.593645 systemd[1]: cri-containerd-489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f.scope: Deactivated successfully. Sep 9 05:12:54.595506 containerd[1507]: time="2025-09-09T05:12:54.595477026Z" level=info msg="TaskExit event in podsandbox handler container_id:\"489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f\" id:\"489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f\" pid:2894 exit_status:137 exited_at:{seconds:1757394774 nanos:595196221}" Sep 9 05:12:54.598767 containerd[1507]: time="2025-09-09T05:12:54.598687644Z" level=info msg="StopContainer for \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" returns successfully" Sep 9 05:12:54.599487 containerd[1507]: time="2025-09-09T05:12:54.599464698Z" level=info msg="StopPodSandbox for \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\"" Sep 9 05:12:54.599540 containerd[1507]: time="2025-09-09T05:12:54.599524619Z" level=info msg="Container to stop \"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:12:54.599574 containerd[1507]: time="2025-09-09T05:12:54.599539219Z" level=info msg="Container to stop \"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:12:54.599782 containerd[1507]: time="2025-09-09T05:12:54.599763103Z" level=info msg="Container to stop \"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:12:54.599828 containerd[1507]: time="2025-09-09T05:12:54.599784624Z" level=info msg="Container to stop \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:12:54.599828 containerd[1507]: time="2025-09-09T05:12:54.599793704Z" level=info msg="Container to stop \"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\" must be in running 
or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:12:54.605875 systemd[1]: cri-containerd-a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968.scope: Deactivated successfully. Sep 9 05:12:54.620994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f-rootfs.mount: Deactivated successfully. Sep 9 05:12:54.626083 containerd[1507]: time="2025-09-09T05:12:54.626045334Z" level=info msg="shim disconnected" id=489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f namespace=k8s.io Sep 9 05:12:54.626224 containerd[1507]: time="2025-09-09T05:12:54.626080655Z" level=warning msg="cleaning up after shim disconnected" id=489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f namespace=k8s.io Sep 9 05:12:54.626251 containerd[1507]: time="2025-09-09T05:12:54.626110615Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:12:54.626792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968-rootfs.mount: Deactivated successfully. Sep 9 05:12:54.628788 containerd[1507]: time="2025-09-09T05:12:54.628722702Z" level=info msg="shim disconnected" id=a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968 namespace=k8s.io Sep 9 05:12:54.629246 containerd[1507]: time="2025-09-09T05:12:54.628750862Z" level=warning msg="cleaning up after shim disconnected" id=a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968 namespace=k8s.io Sep 9 05:12:54.629246 containerd[1507]: time="2025-09-09T05:12:54.629030467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:12:54.639986 containerd[1507]: time="2025-09-09T05:12:54.639942343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" id:\"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" pid:2824 exit_status:137 exited_at:{seconds:1757394774 nanos:607840968}" Sep 9 05:12:54.640259 containerd[1507]: time="2025-09-09T05:12:54.640237428Z" level=info msg="received exit event sandbox_id:\"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" exit_status:137 exited_at:{seconds:1757394774 nanos:607840968}" Sep 9 05:12:54.640354 containerd[1507]: time="2025-09-09T05:12:54.640327070Z" level=info msg="received exit event sandbox_id:\"489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f\" exit_status:137 exited_at:{seconds:1757394774 nanos:595196221}" Sep 9 05:12:54.640780 containerd[1507]: time="2025-09-09T05:12:54.640754637Z" level=info msg="TearDown network for sandbox \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" successfully" Sep 9 05:12:54.640867 containerd[1507]: time="2025-09-09T05:12:54.640852279Z" level=info msg="StopPodSandbox for \"a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968\" returns successfully" Sep 9 05:12:54.641023 containerd[1507]: time="2025-09-09T05:12:54.640810398Z" level=info msg="TearDown network for sandbox \"489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f\" successfully" Sep 9 05:12:54.641086 containerd[1507]: time="2025-09-09T05:12:54.641073123Z" level=info msg="StopPodSandbox for \"489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f\" returns successfully" Sep 9 05:12:54.641852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-489ba88204e96e9ec7a0ba3d9da4af4f00cda8a202b8fca162aaebf4f7b10d3f-shm.mount: Deactivated 
successfully. Sep 9 05:12:54.641936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a109cae57815729a3d5373bff52cd5eacecf29d5e547fe35529dfc754fd16968-shm.mount: Deactivated successfully. Sep 9 05:12:54.710847 kubelet[2666]: I0909 05:12:54.710817 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-run\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.710847 kubelet[2666]: I0909 05:12:54.710853 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/938d4635-5470-4017-8c4f-e2705575ba8a-clustermesh-secrets\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711301 kubelet[2666]: I0909 05:12:54.710869 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-xtables-lock\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711301 kubelet[2666]: I0909 05:12:54.710883 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cni-path\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711301 kubelet[2666]: I0909 05:12:54.710897 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-hostproc\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711301 kubelet[2666]: I0909 05:12:54.710913 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj97z\" (UniqueName: \"kubernetes.io/projected/938d4635-5470-4017-8c4f-e2705575ba8a-kube-api-access-zj97z\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711301 kubelet[2666]: I0909 05:12:54.710928 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-etc-cni-netd\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711301 kubelet[2666]: I0909 05:12:54.710943 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-lib-modules\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711462 kubelet[2666]: I0909 05:12:54.710962 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnpn7\" (UniqueName: \"kubernetes.io/projected/a25a10f8-0125-4c69-8369-96332003a4ce-kube-api-access-xnpn7\") pod \"a25a10f8-0125-4c69-8369-96332003a4ce\" (UID: \"a25a10f8-0125-4c69-8369-96332003a4ce\") " Sep 9 05:12:54.711462 kubelet[2666]: I0909 05:12:54.710979 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-config-path\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711462 kubelet[2666]: I0909 05:12:54.710993 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-host-proc-sys-kernel\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711462 kubelet[2666]: I0909 05:12:54.711007 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-bpf-maps\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711462 kubelet[2666]: I0909 05:12:54.711022 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-cgroup\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711462 kubelet[2666]: I0909 05:12:54.711039 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a25a10f8-0125-4c69-8369-96332003a4ce-cilium-config-path\") pod \"a25a10f8-0125-4c69-8369-96332003a4ce\" (UID: \"a25a10f8-0125-4c69-8369-96332003a4ce\") " Sep 9 05:12:54.711615 kubelet[2666]: I0909 05:12:54.711055 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-host-proc-sys-net\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.711615 kubelet[2666]: I0909 05:12:54.711112 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/938d4635-5470-4017-8c4f-e2705575ba8a-hubble-tls\") pod \"938d4635-5470-4017-8c4f-e2705575ba8a\" (UID: \"938d4635-5470-4017-8c4f-e2705575ba8a\") " Sep 9 05:12:54.716340 kubelet[2666]: I0909 05:12:54.716051 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:12:54.717990 kubelet[2666]: I0909 05:12:54.716542 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-hostproc" (OuterVolumeSpecName: "hostproc") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:12:54.717990 kubelet[2666]: I0909 05:12:54.717045 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:12:54.717990 kubelet[2666]: I0909 05:12:54.717784 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:12:54.717990 kubelet[2666]: I0909 05:12:54.717811 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cni-path" (OuterVolumeSpecName: "cni-path") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:12:54.717990 kubelet[2666]: I0909 05:12:54.717827 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:12:54.718118 kubelet[2666]: I0909 05:12:54.717851 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:12:54.718118 kubelet[2666]: I0909 05:12:54.717915 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938d4635-5470-4017-8c4f-e2705575ba8a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:12:54.718118 kubelet[2666]: I0909 05:12:54.717991 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:12:54.718401 kubelet[2666]: I0909 05:12:54.718371 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:12:54.718755 kubelet[2666]: I0909 05:12:54.718729 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:12:54.719223 kubelet[2666]: I0909 05:12:54.719190 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938d4635-5470-4017-8c4f-e2705575ba8a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 05:12:54.720637 kubelet[2666]: I0909 05:12:54.720376 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25a10f8-0125-4c69-8369-96332003a4ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a25a10f8-0125-4c69-8369-96332003a4ce" (UID: "a25a10f8-0125-4c69-8369-96332003a4ce"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:12:54.720917 kubelet[2666]: I0909 05:12:54.720869 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25a10f8-0125-4c69-8369-96332003a4ce-kube-api-access-xnpn7" (OuterVolumeSpecName: "kube-api-access-xnpn7") pod "a25a10f8-0125-4c69-8369-96332003a4ce" (UID: "a25a10f8-0125-4c69-8369-96332003a4ce"). InnerVolumeSpecName "kube-api-access-xnpn7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:12:54.722717 kubelet[2666]: I0909 05:12:54.721845 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938d4635-5470-4017-8c4f-e2705575ba8a-kube-api-access-zj97z" (OuterVolumeSpecName: "kube-api-access-zj97z") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "kube-api-access-zj97z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:12:54.722893 kubelet[2666]: I0909 05:12:54.722859 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "938d4635-5470-4017-8c4f-e2705575ba8a" (UID: "938d4635-5470-4017-8c4f-e2705575ba8a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:12:54.812034 kubelet[2666]: I0909 05:12:54.811932 2666 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812178 kubelet[2666]: I0909 05:12:54.812165 2666 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812234 kubelet[2666]: I0909 05:12:54.812224 2666 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812281 kubelet[2666]: I0909 05:12:54.812273 2666 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a25a10f8-0125-4c69-8369-96332003a4ce-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812333 kubelet[2666]: I0909 05:12:54.812324 2666 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812379 kubelet[2666]: I0909 05:12:54.812371 2666 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/938d4635-5470-4017-8c4f-e2705575ba8a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812433 kubelet[2666]: I0909 05:12:54.812424 2666 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812485 kubelet[2666]: I0909 05:12:54.812476 2666 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/938d4635-5470-4017-8c4f-e2705575ba8a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812540 kubelet[2666]: I0909 05:12:54.812530 2666 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812592 kubelet[2666]: I0909 05:12:54.812582 2666 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812644 kubelet[2666]: I0909 05:12:54.812634 2666 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812730 kubelet[2666]: I0909 05:12:54.812699 2666 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zj97z\" (UniqueName: \"kubernetes.io/projected/938d4635-5470-4017-8c4f-e2705575ba8a-kube-api-access-zj97z\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812798 kubelet[2666]: I0909 05:12:54.812788 2666 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 
9 05:12:54.812848 kubelet[2666]: I0909 05:12:54.812840 2666 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/938d4635-5470-4017-8c4f-e2705575ba8a-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812900 kubelet[2666]: I0909 05:12:54.812891 2666 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnpn7\" (UniqueName: \"kubernetes.io/projected/a25a10f8-0125-4c69-8369-96332003a4ce-kube-api-access-xnpn7\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:54.812945 kubelet[2666]: I0909 05:12:54.812937 2666 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/938d4635-5470-4017-8c4f-e2705575ba8a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:12:55.178667 systemd[1]: Removed slice kubepods-burstable-pod938d4635_5470_4017_8c4f_e2705575ba8a.slice - libcontainer container kubepods-burstable-pod938d4635_5470_4017_8c4f_e2705575ba8a.slice. Sep 9 05:12:55.178993 systemd[1]: kubepods-burstable-pod938d4635_5470_4017_8c4f_e2705575ba8a.slice: Consumed 6.265s CPU time, 123.6M memory peak, 6.5M read from disk, 12.9M written to disk. Sep 9 05:12:55.179941 systemd[1]: Removed slice kubepods-besteffort-poda25a10f8_0125_4c69_8369_96332003a4ce.slice - libcontainer container kubepods-besteffort-poda25a10f8_0125_4c69_8369_96332003a4ce.slice. Sep 9 05:12:55.224397 kubelet[2666]: E0909 05:12:55.224371 2666 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:12:55.378360 kubelet[2666]: I0909 05:12:55.378334 2666 scope.go:117] "RemoveContainer" containerID="06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9" Sep 9 05:12:55.382352 containerd[1507]: time="2025-09-09T05:12:55.382317475Z" level=info msg="RemoveContainer for \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\"" Sep 9 05:12:55.389931 containerd[1507]: time="2025-09-09T05:12:55.389900206Z" level=info msg="RemoveContainer for \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\" returns successfully" Sep 9 05:12:55.390219 kubelet[2666]: I0909 05:12:55.390143 2666 scope.go:117] "RemoveContainer" containerID="06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9" Sep 9 05:12:55.390509 containerd[1507]: time="2025-09-09T05:12:55.390474896Z" level=error msg="ContainerStatus for \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\": not found" Sep 9 05:12:55.394792 kubelet[2666]: E0909 05:12:55.394758 2666 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\": not found" containerID="06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9" Sep 9 05:12:55.395112 kubelet[2666]: I0909 05:12:55.394802 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9"} err="failed to get container status \"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"06e567ebbc3df6b71b07e692f6b7032183efbb0e85e1b970af43532b9facecf9\": not found" Sep 9 05:12:55.395112 kubelet[2666]: I0909 05:12:55.394837 2666 scope.go:117] "RemoveContainer" containerID="3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64" Sep 9 05:12:55.396598 containerd[1507]: time="2025-09-09T05:12:55.396480720Z" level=info msg="RemoveContainer for \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\"" Sep 9 05:12:55.400647 containerd[1507]: time="2025-09-09T05:12:55.400622512Z" level=info msg="RemoveContainer for \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" returns successfully" Sep 9 05:12:55.400907 kubelet[2666]: I0909 05:12:55.400868 2666 scope.go:117] "RemoveContainer" containerID="da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499" Sep 9 05:12:55.403233 containerd[1507]: time="2025-09-09T05:12:55.403121995Z" level=info msg="RemoveContainer for \"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\"" Sep 9 05:12:55.407492 containerd[1507]: time="2025-09-09T05:12:55.407431389Z" level=info msg="RemoveContainer for \"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\" returns successfully" Sep 9 05:12:55.407614 kubelet[2666]: I0909 05:12:55.407593 2666 scope.go:117] "RemoveContainer" containerID="69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193" Sep 9 05:12:55.416627 containerd[1507]: time="2025-09-09T05:12:55.416546307Z" level=info msg="RemoveContainer for \"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\"" Sep 9 05:12:55.422665 containerd[1507]: time="2025-09-09T05:12:55.422635692Z" level=info msg="RemoveContainer for \"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\" returns successfully" Sep 9 05:12:55.422820 kubelet[2666]: I0909 05:12:55.422804 2666 scope.go:117] "RemoveContainer" containerID="ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc" Sep 9 05:12:55.424293 containerd[1507]: time="2025-09-09T05:12:55.424271320Z" level=info msg="RemoveContainer for \"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\"" Sep 9 05:12:55.426778 containerd[1507]: time="2025-09-09T05:12:55.426755803Z" level=info msg="RemoveContainer for \"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\" returns successfully" Sep 9 05:12:55.427003 kubelet[2666]: I0909 05:12:55.426881 2666 scope.go:117] "RemoveContainer" containerID="95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567" Sep 9 05:12:55.428055 containerd[1507]: time="2025-09-09T05:12:55.428035585Z" level=info msg="RemoveContainer for \"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\"" Sep 9 05:12:55.430607 containerd[1507]: time="2025-09-09T05:12:55.430492467Z" level=info msg="RemoveContainer for \"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\" returns successfully" Sep 9 05:12:55.431166 kubelet[2666]: I0909 05:12:55.431078 2666 scope.go:117] "RemoveContainer" containerID="3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64" Sep 9 05:12:55.431339 containerd[1507]: time="2025-09-09T05:12:55.431290241Z" level=error msg="ContainerStatus for \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\": not found" Sep 9 05:12:55.431482 kubelet[2666]: E0909 05:12:55.431450 2666 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\": not found" containerID="3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64" Sep 9 05:12:55.431574 kubelet[2666]: I0909 05:12:55.431552 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64"} err="failed to get container status \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\": rpc error: code = NotFound desc = an error occurred when try to find container \"3593d9db945d03e3b7452f37497800bef67a03013eafe7b79e9b7b7ddc4cba64\": not found" Sep 9 05:12:55.431687 kubelet[2666]: I0909 05:12:55.431619 2666 scope.go:117] "RemoveContainer" containerID="da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499" Sep 9 05:12:55.431826 containerd[1507]: time="2025-09-09T05:12:55.431780290Z" level=error msg="ContainerStatus for \"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\": not found" Sep 9 05:12:55.431986 kubelet[2666]: E0909 05:12:55.431891 2666 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\": not found" containerID="da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499" Sep 9 05:12:55.431986 kubelet[2666]: I0909 05:12:55.431911 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499"} err="failed to get container status \"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\": rpc error: code = NotFound desc = an error occurred when try to find container \"da0af6a69772586c17360f9dd37df2384e04045a32705d0b10c390ad77b5b499\": not found" Sep 9 05:12:55.431986 kubelet[2666]: I0909 05:12:55.431925 2666 scope.go:117] "RemoveContainer" containerID="69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193" Sep 9 05:12:55.432076 containerd[1507]: time="2025-09-09T05:12:55.432030214Z" level=error msg="ContainerStatus for \"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\": not found" Sep 9 05:12:55.432130 kubelet[2666]: E0909 05:12:55.432107 2666 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\": not found" containerID="69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193" Sep 9 05:12:55.432161 kubelet[2666]: I0909 05:12:55.432131 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193"} err="failed to get container status \"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\": rpc error: code = NotFound desc = an error occurred when try to find container \"69554c2f91cc5041b13244407ce376d1bbd6228c2762420f3b6791e085a2c193\": not found" Sep 9 
05:12:55.432184 kubelet[2666]: I0909 05:12:55.432162 2666 scope.go:117] "RemoveContainer" containerID="ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc" Sep 9 05:12:55.432288 containerd[1507]: time="2025-09-09T05:12:55.432265978Z" level=error msg="ContainerStatus for \"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\": not found" Sep 9 05:12:55.432372 kubelet[2666]: E0909 05:12:55.432346 2666 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\": not found" containerID="ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc" Sep 9 05:12:55.432406 kubelet[2666]: I0909 05:12:55.432364 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc"} err="failed to get container status \"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae3c7582cbab312c74bc7e4158c062dc009035c6879b23d7903823de1e1568dc\": not found" Sep 9 05:12:55.432406 kubelet[2666]: I0909 05:12:55.432386 2666 scope.go:117] "RemoveContainer" containerID="95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567" Sep 9 05:12:55.432628 containerd[1507]: time="2025-09-09T05:12:55.432525343Z" level=error msg="ContainerStatus for \"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\": not found" Sep 9 05:12:55.432803 kubelet[2666]: E0909 05:12:55.432763 2666 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\": not found" containerID="95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567" Sep 9 05:12:55.432803 kubelet[2666]: I0909 05:12:55.432785 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567"} err="failed to get container status \"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\": rpc error: code = NotFound desc = an error occurred when try to find container \"95a11d1f1195bc4cad8705bb35963756610d9ef3e5984444e4d710475634f567\": not found" Sep 9 05:12:55.546400 systemd[1]: var-lib-kubelet-pods-a25a10f8\x2d0125\x2d4c69\x2d8369\x2d96332003a4ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxnpn7.mount: Deactivated successfully. Sep 9 05:12:55.546494 systemd[1]: var-lib-kubelet-pods-938d4635\x2d5470\x2d4017\x2d8c4f\x2de2705575ba8a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzj97z.mount: Deactivated successfully. Sep 9 05:12:55.546544 systemd[1]: var-lib-kubelet-pods-938d4635\x2d5470\x2d4017\x2d8c4f\x2de2705575ba8a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
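
The repeated "ContainerStatus ... rpc error: code = NotFound" entries above are the kubelet asking containerd for the status of containers it has just removed; the runtime answers with a gRPC NotFound status, and the kubelet records it and moves on rather than treating it as a failure. Below is a minimal, illustrative sketch (not kubelet source) of telling that benign NotFound apart from a real RPC error using google.golang.org/grpc/status; the container ID and the helper name are placeholders.

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // classifyStatusErr decides whether a ContainerStatus-style RPC error means
    // "the container is already gone" (NotFound) or is a real failure. It mirrors
    // the pattern behind the log lines above, not the kubelet's actual code.
    func classifyStatusErr(containerID string, err error) error {
        if err == nil {
            return nil
        }
        if status.Code(err) == codes.NotFound {
            // Already removed: log it and carry on, as the kubelet does above.
            fmt.Printf("container %s already gone: %v\n", containerID, err)
            return nil
        }
        return fmt.Errorf("ContainerStatus for %s failed: %w", containerID, err)
    }

    func main() {
        // Simulate the runtime's NotFound answer seen in the log (placeholder ID).
        rpcErr := status.Error(codes.NotFound,
            "an error occurred when try to find container \"0123abcd\": not found")
        if err := classifyStatusErr("0123abcd", rpcErr); err != nil {
            panic(err)
        }
        // Any other error code would surface instead of being swallowed.
        if err := classifyStatusErr("0123abcd", errors.New("transport is closing")); err != nil {
            fmt.Println("would surface:", err)
        }
    }
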
Sep 9 05:12:55.546596 systemd[1]: var-lib-kubelet-pods-938d4635\x2d5470\x2d4017\x2d8c4f\x2de2705575ba8a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 05:12:56.457730 sshd[4272]: Connection closed by 10.0.0.1 port 34022 Sep 9 05:12:56.457894 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:56.469675 systemd[1]: sshd@21-10.0.0.133:22-10.0.0.1:34022.service: Deactivated successfully. Sep 9 05:12:56.471129 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 05:12:56.471306 systemd[1]: session-22.scope: Consumed 1.221s CPU time, 24.7M memory peak. Sep 9 05:12:56.471779 systemd-logind[1484]: Session 22 logged out. Waiting for processes to exit. Sep 9 05:12:56.473796 systemd[1]: Started sshd@22-10.0.0.133:22-10.0.0.1:34028.service - OpenSSH per-connection server daemon (10.0.0.1:34028). Sep 9 05:12:56.474239 systemd-logind[1484]: Removed session 22. Sep 9 05:12:56.526577 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 34028 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:56.527570 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:56.531051 systemd-logind[1484]: New session 23 of user core. Sep 9 05:12:56.541839 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 05:12:56.674888 kubelet[2666]: I0909 05:12:56.674837 2666 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T05:12:56Z","lastTransitionTime":"2025-09-09T05:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 05:12:57.125158 sshd[4431]: Connection closed by 10.0.0.1 port 34028 Sep 9 05:12:57.124556 sshd-session[4428]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:57.131991 systemd[1]: sshd@22-10.0.0.133:22-10.0.0.1:34028.service: Deactivated successfully. Sep 9 05:12:57.133380 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 05:12:57.134990 systemd-logind[1484]: Session 23 logged out. Waiting for processes to exit. Sep 9 05:12:57.137979 systemd[1]: Started sshd@23-10.0.0.133:22-10.0.0.1:34044.service - OpenSSH per-connection server daemon (10.0.0.1:34044). Sep 9 05:12:57.139554 systemd-logind[1484]: Removed session 23. Sep 9 05:12:57.160534 systemd[1]: Created slice kubepods-burstable-pod0aef82a5_2659_4973_bbea_be688146163b.slice - libcontainer container kubepods-burstable-pod0aef82a5_2659_4973_bbea_be688146163b.slice. Sep 9 05:12:57.175155 kubelet[2666]: I0909 05:12:57.175120 2666 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="938d4635-5470-4017-8c4f-e2705575ba8a" path="/var/lib/kubelet/pods/938d4635-5470-4017-8c4f-e2705575ba8a/volumes" Sep 9 05:12:57.175881 kubelet[2666]: I0909 05:12:57.175849 2666 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25a10f8-0125-4c69-8369-96332003a4ce" path="/var/lib/kubelet/pods/a25a10f8-0125-4c69-8369-96332003a4ce/volumes" Sep 9 05:12:57.201915 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 34044 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:57.202355 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:57.206850 systemd-logind[1484]: New session 24 of user core. 
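
The mount units deactivated above, such as var-lib-kubelet-pods-…-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount, are kubelet volume paths run through systemd's unit-name escaping: the leading "/" is dropped, every other "/" becomes "-", and any byte outside letters, digits, "_" and "." is written as a \xNN escape, which is why the dashes inside the pod UID show up as \x2d and "~" as \x7e. The sketch below approximates that escaping for illustration; the authoritative rules are in systemd.unit(5) and systemd-escape(1), and corner cases (for example a leading ".") are ignored here.

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeMountUnit approximates the path escaping visible in the log: strip the
    // leading "/", turn every other "/" into "-", and render any byte that is not
    // an ASCII letter, digit, "_" or "." as a \xNN escape. Illustrative only.
    func escapeMountUnit(path string) string {
        p := strings.TrimPrefix(path, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String() + ".mount"
    }

    func main() {
        // Path reconstructed from the kubelet volume layout implied by the log.
        path := "/var/lib/kubelet/pods/938d4635-5470-4017-8c4f-e2705575ba8a" +
            "/volumes/kubernetes.io~projected/hubble-tls"
        // Prints the same unit name systemd deactivates above.
        fmt.Println(escapeMountUnit(path))
    }
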
Sep 9 05:12:57.216917 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 05:12:57.224613 kubelet[2666]: I0909 05:12:57.224575 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0aef82a5-2659-4973-bbea-be688146163b-bpf-maps\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224613 kubelet[2666]: I0909 05:12:57.224606 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0aef82a5-2659-4973-bbea-be688146163b-cni-path\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224837 kubelet[2666]: I0909 05:12:57.224625 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aef82a5-2659-4973-bbea-be688146163b-lib-modules\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224837 kubelet[2666]: I0909 05:12:57.224641 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0aef82a5-2659-4973-bbea-be688146163b-cilium-config-path\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224837 kubelet[2666]: I0909 05:12:57.224657 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0aef82a5-2659-4973-bbea-be688146163b-clustermesh-secrets\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224837 kubelet[2666]: I0909 05:12:57.224730 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0aef82a5-2659-4973-bbea-be688146163b-cilium-ipsec-secrets\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224837 kubelet[2666]: I0909 05:12:57.224785 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0aef82a5-2659-4973-bbea-be688146163b-host-proc-sys-net\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224940 kubelet[2666]: I0909 05:12:57.224814 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0aef82a5-2659-4973-bbea-be688146163b-cilium-cgroup\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224940 kubelet[2666]: I0909 05:12:57.224833 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wwdq\" (UniqueName: \"kubernetes.io/projected/0aef82a5-2659-4973-bbea-be688146163b-kube-api-access-5wwdq\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224940 kubelet[2666]: I0909 05:12:57.224850 2666 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0aef82a5-2659-4973-bbea-be688146163b-etc-cni-netd\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224940 kubelet[2666]: I0909 05:12:57.224864 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0aef82a5-2659-4973-bbea-be688146163b-host-proc-sys-kernel\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224940 kubelet[2666]: I0909 05:12:57.224878 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0aef82a5-2659-4973-bbea-be688146163b-hubble-tls\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.224940 kubelet[2666]: I0909 05:12:57.224895 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0aef82a5-2659-4973-bbea-be688146163b-hostproc\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.225045 kubelet[2666]: I0909 05:12:57.224909 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aef82a5-2659-4973-bbea-be688146163b-xtables-lock\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.225045 kubelet[2666]: I0909 05:12:57.224926 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0aef82a5-2659-4973-bbea-be688146163b-cilium-run\") pod \"cilium-cn85z\" (UID: \"0aef82a5-2659-4973-bbea-be688146163b\") " pod="kube-system/cilium-cn85z" Sep 9 05:12:57.265582 sshd[4446]: Connection closed by 10.0.0.1 port 34044 Sep 9 05:12:57.266018 sshd-session[4443]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:57.281830 systemd[1]: sshd@23-10.0.0.133:22-10.0.0.1:34044.service: Deactivated successfully. Sep 9 05:12:57.283329 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 05:12:57.284457 systemd-logind[1484]: Session 24 logged out. Waiting for processes to exit. Sep 9 05:12:57.286063 systemd[1]: Started sshd@24-10.0.0.133:22-10.0.0.1:34060.service - OpenSSH per-connection server daemon (10.0.0.1:34060). Sep 9 05:12:57.286926 systemd-logind[1484]: Removed session 24. Sep 9 05:12:57.337024 sshd[4453]: Accepted publickey for core from 10.0.0.1 port 34060 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 05:12:57.338697 sshd-session[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:57.346915 systemd-logind[1484]: New session 25 of user core. Sep 9 05:12:57.354830 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 9 05:12:57.465344 kubelet[2666]: E0909 05:12:57.464945 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:12:57.465472 containerd[1507]: time="2025-09-09T05:12:57.465436702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cn85z,Uid:0aef82a5-2659-4973-bbea-be688146163b,Namespace:kube-system,Attempt:0,}" Sep 9 05:12:57.479861 containerd[1507]: time="2025-09-09T05:12:57.479822813Z" level=info msg="connecting to shim 7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb" address="unix:///run/containerd/s/bd5981a77e01ca4a102922dd66d3fee7c8e3e5b50593633f9d12cd2fe3128997" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:12:57.507887 systemd[1]: Started cri-containerd-7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb.scope - libcontainer container 7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb. Sep 9 05:12:57.527401 containerd[1507]: time="2025-09-09T05:12:57.527367815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cn85z,Uid:0aef82a5-2659-4973-bbea-be688146163b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb\"" Sep 9 05:12:57.528074 kubelet[2666]: E0909 05:12:57.528035 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:12:57.532016 containerd[1507]: time="2025-09-09T05:12:57.531981169Z" level=info msg="CreateContainer within sandbox \"7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:12:57.538360 containerd[1507]: time="2025-09-09T05:12:57.538314071Z" level=info msg="Container e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:57.543485 containerd[1507]: time="2025-09-09T05:12:57.543309551Z" level=info msg="CreateContainer within sandbox \"7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e\"" Sep 9 05:12:57.544815 containerd[1507]: time="2025-09-09T05:12:57.544786255Z" level=info msg="StartContainer for \"e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e\"" Sep 9 05:12:57.545610 containerd[1507]: time="2025-09-09T05:12:57.545586268Z" level=info msg="connecting to shim e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e" address="unix:///run/containerd/s/bd5981a77e01ca4a102922dd66d3fee7c8e3e5b50593633f9d12cd2fe3128997" protocol=ttrpc version=3 Sep 9 05:12:57.563872 systemd[1]: Started cri-containerd-e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e.scope - libcontainer container e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e. Sep 9 05:12:57.587963 containerd[1507]: time="2025-09-09T05:12:57.587778545Z" level=info msg="StartContainer for \"e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e\" returns successfully" Sep 9 05:12:57.595868 systemd[1]: cri-containerd-e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e.scope: Deactivated successfully. 
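
The dns.go:153 "Nameserver limits exceeded" warnings that start here and repeat for the rest of the capture mean the node's resolv.conf lists more nameservers than the kubelet will propagate into pod resolv.conf files; the message shows the three that survived the cut (1.1.1.1 1.0.0.1 8.8.8.8), matching the usual limit of three resolvers. Below is a small sketch of that trimming, with a hypothetical resolv.conf as input and search/options lines ignored for brevity.

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // Matches the three servers in the "applied nameserver line" above.
    const maxNameservers = 3

    // trimNameservers keeps only the first maxNameservers "nameserver" entries and
    // reports the rest as omitted, the situation the kubelet warns about.
    func trimNameservers(resolvConf string) (kept, omitted []string) {
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) < 2 || fields[0] != "nameserver" {
                continue
            }
            if len(kept) < maxNameservers {
                kept = append(kept, fields[1])
            } else {
                omitted = append(omitted, fields[1])
            }
        }
        return kept, omitted
    }

    func main() {
        // Hypothetical node resolv.conf that would trigger the warning in the log.
        resolv := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        kept, omitted := trimNameservers(resolv)
        fmt.Println("applied:", strings.Join(kept, " ")) // applied: 1.1.1.1 1.0.0.1 8.8.8.8
        fmt.Println("omitted:", omitted)                 // omitted: [8.8.4.4]
    }
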
Sep 9 05:12:57.596894 containerd[1507]: time="2025-09-09T05:12:57.596865091Z" level=info msg="received exit event container_id:\"e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e\" id:\"e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e\" pid:4526 exited_at:{seconds:1757394777 nanos:596621527}" Sep 9 05:12:57.597051 containerd[1507]: time="2025-09-09T05:12:57.597000053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e\" id:\"e81a01a9f0ba2b103a31895039ae1971579dc7e578cf433992b178e1a9b2467e\" pid:4526 exited_at:{seconds:1757394777 nanos:596621527}" Sep 9 05:12:58.394479 kubelet[2666]: E0909 05:12:58.394404 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:12:58.412222 containerd[1507]: time="2025-09-09T05:12:58.412178894Z" level=info msg="CreateContainer within sandbox \"7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:12:58.417165 containerd[1507]: time="2025-09-09T05:12:58.417037569Z" level=info msg="Container cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:58.421865 containerd[1507]: time="2025-09-09T05:12:58.421832403Z" level=info msg="CreateContainer within sandbox \"7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3\"" Sep 9 05:12:58.422381 containerd[1507]: time="2025-09-09T05:12:58.422304091Z" level=info msg="StartContainer for \"cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3\"" Sep 9 05:12:58.423176 containerd[1507]: time="2025-09-09T05:12:58.423153264Z" level=info msg="connecting to shim cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3" address="unix:///run/containerd/s/bd5981a77e01ca4a102922dd66d3fee7c8e3e5b50593633f9d12cd2fe3128997" protocol=ttrpc version=3 Sep 9 05:12:58.442870 systemd[1]: Started cri-containerd-cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3.scope - libcontainer container cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3. Sep 9 05:12:58.467893 containerd[1507]: time="2025-09-09T05:12:58.467850155Z" level=info msg="StartContainer for \"cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3\" returns successfully" Sep 9 05:12:58.473402 systemd[1]: cri-containerd-cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3.scope: Deactivated successfully. 
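
containerd reports each init container's exit as epoch seconds plus nanoseconds (exited_at:{seconds:1757394777 nanos:596621527} for mount-cgroup above). Feeding that into time.Unix recovers the wall-clock instant seen in the surrounding lines, and subtracting the "StartContainer ... returns successfully" timestamp gives the container's lifetime, only a few milliseconds for this step. A quick conversion sketch using the two values copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at from the mount-cgroup TaskExit event above.
        exited := time.Unix(1757394777, 596621527).UTC()

        // Timestamp of the matching "StartContainer ... returns successfully" entry.
        started, err := time.Parse(time.RFC3339Nano, "2025-09-09T05:12:57.587778545Z")
        if err != nil {
            panic(err)
        }

        fmt.Println("exited at:", exited.Format(time.RFC3339Nano)) // 2025-09-09T05:12:57.596621527Z
        fmt.Println("lifetime: ", exited.Sub(started))             // roughly 8.8ms
    }
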
Sep 9 05:12:58.474878 containerd[1507]: time="2025-09-09T05:12:58.474697381Z" level=info msg="received exit event container_id:\"cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3\" id:\"cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3\" pid:4576 exited_at:{seconds:1757394778 nanos:474485058}" Sep 9 05:12:58.474948 containerd[1507]: time="2025-09-09T05:12:58.474776462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3\" id:\"cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3\" pid:4576 exited_at:{seconds:1757394778 nanos:474485058}" Sep 9 05:12:58.492425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfe6542db5fe273862f7bac2911ea81f6486face4fdab14175d5c99c09874df3-rootfs.mount: Deactivated successfully. Sep 9 05:12:59.398807 kubelet[2666]: E0909 05:12:59.398769 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:12:59.404447 containerd[1507]: time="2025-09-09T05:12:59.404411170Z" level=info msg="CreateContainer within sandbox \"7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:12:59.412812 containerd[1507]: time="2025-09-09T05:12:59.411773279Z" level=info msg="Container 121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:59.419149 containerd[1507]: time="2025-09-09T05:12:59.419110629Z" level=info msg="CreateContainer within sandbox \"7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0\"" Sep 9 05:12:59.419567 containerd[1507]: time="2025-09-09T05:12:59.419544555Z" level=info msg="StartContainer for \"121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0\"" Sep 9 05:12:59.421054 containerd[1507]: time="2025-09-09T05:12:59.421026217Z" level=info msg="connecting to shim 121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0" address="unix:///run/containerd/s/bd5981a77e01ca4a102922dd66d3fee7c8e3e5b50593633f9d12cd2fe3128997" protocol=ttrpc version=3 Sep 9 05:12:59.443887 systemd[1]: Started cri-containerd-121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0.scope - libcontainer container 121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0. Sep 9 05:12:59.478608 containerd[1507]: time="2025-09-09T05:12:59.478571435Z" level=info msg="StartContainer for \"121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0\" returns successfully" Sep 9 05:12:59.478855 systemd[1]: cri-containerd-121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0.scope: Deactivated successfully. 
Sep 9 05:12:59.483327 containerd[1507]: time="2025-09-09T05:12:59.482976380Z" level=info msg="received exit event container_id:\"121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0\" id:\"121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0\" pid:4619 exited_at:{seconds:1757394779 nanos:482403012}" Sep 9 05:12:59.483327 containerd[1507]: time="2025-09-09T05:12:59.483058101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0\" id:\"121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0\" pid:4619 exited_at:{seconds:1757394779 nanos:482403012}" Sep 9 05:12:59.503027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-121b51f6dc418658380479bdde55f670edefced4dca67dc7c6ebe348720039d0-rootfs.mount: Deactivated successfully. Sep 9 05:13:00.171861 kubelet[2666]: E0909 05:13:00.171781 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:13:00.225180 kubelet[2666]: E0909 05:13:00.225146 2666 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:13:00.405312 kubelet[2666]: E0909 05:13:00.405282 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:13:00.409094 containerd[1507]: time="2025-09-09T05:13:00.408586307Z" level=info msg="CreateContainer within sandbox \"7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 05:13:00.415407 containerd[1507]: time="2025-09-09T05:13:00.415369805Z" level=info msg="Container f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:13:00.420363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount722910199.mount: Deactivated successfully. Sep 9 05:13:00.425351 containerd[1507]: time="2025-09-09T05:13:00.425246627Z" level=info msg="CreateContainer within sandbox \"7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c\"" Sep 9 05:13:00.426002 containerd[1507]: time="2025-09-09T05:13:00.425964357Z" level=info msg="StartContainer for \"f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c\"" Sep 9 05:13:00.427114 containerd[1507]: time="2025-09-09T05:13:00.426763008Z" level=info msg="connecting to shim f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c" address="unix:///run/containerd/s/bd5981a77e01ca4a102922dd66d3fee7c8e3e5b50593633f9d12cd2fe3128997" protocol=ttrpc version=3 Sep 9 05:13:00.446883 systemd[1]: Started cri-containerd-f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c.scope - libcontainer container f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c. Sep 9 05:13:00.470968 systemd[1]: cri-containerd-f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c.scope: Deactivated successfully. 
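
Until the cilium-agent container started further below installs a CNI configuration, the kubelet keeps logging kubelet.go:3117 "Container runtime network not ready" with networkReady="NetworkReady=false reason:NetworkPluginNotReady message:...", the same condition that flipped the node to NotReady earlier (setters.go:618). If those fields are needed separately, a regex over the exact layout seen here works; note the "key:value" format is inferred from this capture, not a documented interface.

    package main

    import (
        "fmt"
        "regexp"
    )

    // condRe captures the pieces of the networkReady string printed by the kubelet
    // lines above: "<Type>=<status> reason:<reason> message:<rest>".
    var condRe = regexp.MustCompile(`^(\w+)=(\w+) reason:(\S+) message:(.*)$`)

    func main() {
        s := "NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
        m := condRe.FindStringSubmatch(s)
        if m == nil {
            panic("unexpected condition format")
        }
        fmt.Println("type:   ", m[1]) // NetworkReady
        fmt.Println("status: ", m[2]) // false
        fmt.Println("reason: ", m[3]) // NetworkPluginNotReady
        fmt.Println("message:", m[4]) // Network plugin returns error: cni plugin not initialized
    }
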
Sep 9 05:13:00.471948 containerd[1507]: time="2025-09-09T05:13:00.471912576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c\" id:\"f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c\" pid:4658 exited_at:{seconds:1757394780 nanos:471162685}" Sep 9 05:13:00.472867 containerd[1507]: time="2025-09-09T05:13:00.472835309Z" level=info msg="received exit event container_id:\"f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c\" id:\"f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c\" pid:4658 exited_at:{seconds:1757394780 nanos:471162685}" Sep 9 05:13:00.480390 containerd[1507]: time="2025-09-09T05:13:00.480359577Z" level=info msg="StartContainer for \"f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c\" returns successfully" Sep 9 05:13:00.492645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3ba26d700958d4f6bd5ec1e08df0065f9ab81a288f1c41bbce6f3e29985726c-rootfs.mount: Deactivated successfully. Sep 9 05:13:01.172973 kubelet[2666]: E0909 05:13:01.172945 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:13:01.410198 kubelet[2666]: E0909 05:13:01.410147 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:13:01.416717 containerd[1507]: time="2025-09-09T05:13:01.415720421Z" level=info msg="CreateContainer within sandbox \"7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 05:13:01.444932 containerd[1507]: time="2025-09-09T05:13:01.444835944Z" level=info msg="Container 91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:13:01.446511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1097888135.mount: Deactivated successfully. Sep 9 05:13:01.454390 containerd[1507]: time="2025-09-09T05:13:01.454290154Z" level=info msg="CreateContainer within sandbox \"7384369db5b8160b228fdd4707125c4dbce024d95ca56f3d30e74d533cc4fbbb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875\"" Sep 9 05:13:01.455811 containerd[1507]: time="2025-09-09T05:13:01.454958044Z" level=info msg="StartContainer for \"91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875\"" Sep 9 05:13:01.455923 containerd[1507]: time="2025-09-09T05:13:01.455892737Z" level=info msg="connecting to shim 91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875" address="unix:///run/containerd/s/bd5981a77e01ca4a102922dd66d3fee7c8e3e5b50593633f9d12cd2fe3128997" protocol=ttrpc version=3 Sep 9 05:13:01.485899 systemd[1]: Started cri-containerd-91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875.scope - libcontainer container 91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875. 
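
Taken together, the "CreateContainer within sandbox ... &ContainerMetadata{Name:...}" entries in this sandbox show the order in which the cilium pod's containers come up: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state and finally cilium-agent. The sketch below recovers that order from a capture with a regex on the Name: field; the sample lines are abridged (shortened sandbox ID) but keep the quoting used above.

    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
    )

    // nameRe pulls the container name out of containerd's CreateContainer message,
    // e.g. `... for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}`.
    var nameRe = regexp.MustCompile(`CreateContainer within sandbox .* for container &ContainerMetadata\{Name:([^,]+),`)

    func main() {
        // Abridged stand-ins for the containerd entries above.
        log := `msg="CreateContainer within sandbox \"7384…\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
    msg="CreateContainer within sandbox \"7384…\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
    msg="CreateContainer within sandbox \"7384…\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
    msg="CreateContainer within sandbox \"7384…\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
    msg="CreateContainer within sandbox \"7384…\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"`

        var order []string
        sc := bufio.NewScanner(strings.NewReader(log))
        for sc.Scan() {
            if m := nameRe.FindStringSubmatch(sc.Text()); m != nil {
                order = append(order, m[1])
            }
        }
        fmt.Println(strings.Join(order, " -> "))
        // mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent
    }
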
Sep 9 05:13:01.515457 containerd[1507]: time="2025-09-09T05:13:01.515422639Z" level=info msg="StartContainer for \"91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875\" returns successfully" Sep 9 05:13:01.567946 containerd[1507]: time="2025-09-09T05:13:01.567911645Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875\" id:\"1ad6912417bf4a6d00a13feee0c02065887fafb8bc82df9f03d2ce5436155c88\" pid:4725 exited_at:{seconds:1757394781 nanos:567141634}" Sep 9 05:13:01.781723 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 9 05:13:02.415747 kubelet[2666]: E0909 05:13:02.415501 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:13:02.430903 kubelet[2666]: I0909 05:13:02.430855 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cn85z" podStartSLOduration=5.430841711 podStartE2EDuration="5.430841711s" podCreationTimestamp="2025-09-09 05:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:13:02.43000906 +0000 UTC m=+77.348818820" watchObservedRunningTime="2025-09-09 05:13:02.430841711 +0000 UTC m=+77.349651351" Sep 9 05:13:03.465796 kubelet[2666]: E0909 05:13:03.465747 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:13:03.811292 containerd[1507]: time="2025-09-09T05:13:03.811221997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875\" id:\"6aadd75a1df2f84aa85a666a4a43796e0e163b950aaabfa1e84850af8e599310\" pid:5002 exit_status:1 exited_at:{seconds:1757394783 nanos:810863592}" Sep 9 05:13:03.825719 kubelet[2666]: E0909 05:13:03.824939 2666 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51340->127.0.0.1:43283: write tcp 127.0.0.1:51340->127.0.0.1:43283: write: broken pipe Sep 9 05:13:04.722831 systemd-networkd[1454]: lxc_health: Link UP Sep 9 05:13:04.729004 systemd-networkd[1454]: lxc_health: Gained carrier Sep 9 05:13:05.467142 kubelet[2666]: E0909 05:13:05.467065 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:13:05.942457 containerd[1507]: time="2025-09-09T05:13:05.942191079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875\" id:\"a553aaa1fd550b7c2c490f17d8b8c8ad3e514d799cf4c0d3342ace04106971ca\" pid:5261 exited_at:{seconds:1757394785 nanos:941216428}" Sep 9 05:13:06.056885 systemd-networkd[1454]: lxc_health: Gained IPv6LL Sep 9 05:13:06.425743 kubelet[2666]: E0909 05:13:06.425224 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:13:07.426775 kubelet[2666]: E0909 05:13:07.426691 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:13:08.057941 
containerd[1507]: time="2025-09-09T05:13:08.057891476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875\" id:\"0be9197e7617c4749e334858448bd5eaa4cb5383ad3d36572772a80db1404509\" pid:5288 exited_at:{seconds:1757394788 nanos:57561792}" Sep 9 05:13:08.060220 kubelet[2666]: E0909 05:13:08.060145 2666 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51350->127.0.0.1:43283: write tcp 127.0.0.1:51350->127.0.0.1:43283: write: broken pipe Sep 9 05:13:10.165457 containerd[1507]: time="2025-09-09T05:13:10.165279356Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91fe6a83f505fe12474257eb0e75037391eaad1a4464756fd09df4af43571875\" id:\"20fec5c3df841c78bf8d041a81044f83fe7ed7427ac9f1acbcd6b556c0682c06\" pid:5319 exited_at:{seconds:1757394790 nanos:164780951}" Sep 9 05:13:10.168810 kubelet[2666]: E0909 05:13:10.168779 2666 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51352->127.0.0.1:43283: write tcp 127.0.0.1:51352->127.0.0.1:43283: write: broken pipe Sep 9 05:13:10.177317 sshd[4460]: Connection closed by 10.0.0.1 port 34060 Sep 9 05:13:10.177750 sshd-session[4453]: pam_unix(sshd:session): session closed for user core Sep 9 05:13:10.181027 systemd[1]: sshd@24-10.0.0.133:22-10.0.0.1:34060.service: Deactivated successfully. Sep 9 05:13:10.182673 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 05:13:10.183569 systemd-logind[1484]: Session 25 logged out. Waiting for processes to exit. Sep 9 05:13:10.184796 systemd-logind[1484]: Removed session 25.
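
One number worth pulling out of the tail of this capture: pod_startup_latency_tracker.go:104 reports podStartSLOduration=5.430841711 for cilium-cn85z, which lines up with the gap between the pod's creation timestamp (05:12:57) and the moment the kubelet saw it Running (05:13:02.43); nothing is deducted for image pulls because firstStartedPulling/lastFinishedPulling are the zero time. The check below redoes that arithmetic from the printed timestamps; the sub-millisecond difference from the reported value is presumably the tracker sampling its own clock just after stamping observedRunningTime.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout of Go's default time.Time formatting, which the kubelet uses here.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        created, err := time.Parse(layout, "2025-09-09 05:12:57 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-09-09 05:13:02.43000906 +0000 UTC")
        if err != nil {
            panic(err)
        }

        // Close to the reported podStartSLOduration=5.430841711.
        fmt.Println("observed - created =", observed.Sub(created)) // 5.43000906s
    }
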