Sep 12 17:03:11.784830 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 12 17:03:11.784851 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 12 15:37:01 -00 2025 Sep 12 17:03:11.784860 kernel: KASLR enabled Sep 12 17:03:11.784866 kernel: efi: EFI v2.7 by EDK II Sep 12 17:03:11.784871 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Sep 12 17:03:11.784877 kernel: random: crng init done Sep 12 17:03:11.784884 kernel: secureboot: Secure boot disabled Sep 12 17:03:11.784889 kernel: ACPI: Early table checksum verification disabled Sep 12 17:03:11.784895 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Sep 12 17:03:11.784902 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 12 17:03:11.784908 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:03:11.784914 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:03:11.784919 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:03:11.784926 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:03:11.784933 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:03:11.784940 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:03:11.784946 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:03:11.784952 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:03:11.784967 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:03:11.784973 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 12 17:03:11.784979 kernel: ACPI: Use ACPI SPCR as default console: No Sep 12 17:03:11.784985 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 17:03:11.784991 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Sep 12 17:03:11.784997 kernel: Zone ranges: Sep 12 17:03:11.785003 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 17:03:11.785010 kernel: DMA32 empty Sep 12 17:03:11.785016 kernel: Normal empty Sep 12 17:03:11.785022 kernel: Device empty Sep 12 17:03:11.785028 kernel: Movable zone start for each node Sep 12 17:03:11.785034 kernel: Early memory node ranges Sep 12 17:03:11.785040 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Sep 12 17:03:11.785045 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Sep 12 17:03:11.785051 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Sep 12 17:03:11.785057 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Sep 12 17:03:11.785063 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Sep 12 17:03:11.785069 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Sep 12 17:03:11.785075 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Sep 12 17:03:11.785082 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Sep 12 17:03:11.785088 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Sep 12 17:03:11.785094 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 12 17:03:11.785102 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 12 17:03:11.785108 
kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 12 17:03:11.785115 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 12 17:03:11.785123 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 12 17:03:11.785129 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 12 17:03:11.785136 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Sep 12 17:03:11.785142 kernel: psci: probing for conduit method from ACPI. Sep 12 17:03:11.785148 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 17:03:11.785155 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 17:03:11.785161 kernel: psci: Trusted OS migration not required Sep 12 17:03:11.785167 kernel: psci: SMC Calling Convention v1.1 Sep 12 17:03:11.785173 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 12 17:03:11.785180 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 12 17:03:11.785188 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 12 17:03:11.785194 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 12 17:03:11.785200 kernel: Detected PIPT I-cache on CPU0 Sep 12 17:03:11.785207 kernel: CPU features: detected: GIC system register CPU interface Sep 12 17:03:11.785213 kernel: CPU features: detected: Spectre-v4 Sep 12 17:03:11.785219 kernel: CPU features: detected: Spectre-BHB Sep 12 17:03:11.785225 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 17:03:11.785232 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 17:03:11.785238 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 17:03:11.785245 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 17:03:11.785251 kernel: alternatives: applying boot alternatives Sep 12 17:03:11.785258 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09 Sep 12 17:03:11.785267 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:03:11.785273 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:03:11.785280 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:03:11.785286 kernel: Fallback order for Node 0: 0 Sep 12 17:03:11.785292 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 12 17:03:11.785298 kernel: Policy zone: DMA Sep 12 17:03:11.785304 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:03:11.785311 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 12 17:03:11.785317 kernel: software IO TLB: area num 4. Sep 12 17:03:11.785323 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 12 17:03:11.785330 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Sep 12 17:03:11.785337 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 12 17:03:11.785344 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:03:11.785351 kernel: rcu: RCU event tracing is enabled. Sep 12 17:03:11.785357 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. 
Sep 12 17:03:11.785364 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:03:11.785370 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:03:11.785376 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:03:11.785383 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 12 17:03:11.785389 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:03:11.785396 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:03:11.785402 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 17:03:11.785410 kernel: GICv3: 256 SPIs implemented Sep 12 17:03:11.785416 kernel: GICv3: 0 Extended SPIs implemented Sep 12 17:03:11.785423 kernel: Root IRQ handler: gic_handle_irq Sep 12 17:03:11.785429 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 17:03:11.785435 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 12 17:03:11.785441 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 12 17:03:11.785448 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 12 17:03:11.785454 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 12 17:03:11.785460 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 12 17:03:11.785467 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 12 17:03:11.785473 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 12 17:03:11.785480 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:03:11.785487 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:03:11.785494 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 17:03:11.785500 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 17:03:11.785506 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 17:03:11.785513 kernel: arm-pv: using stolen time PV Sep 12 17:03:11.785519 kernel: Console: colour dummy device 80x25 Sep 12 17:03:11.785526 kernel: ACPI: Core revision 20240827 Sep 12 17:03:11.785532 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 17:03:11.785539 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:03:11.785545 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 12 17:03:11.785553 kernel: landlock: Up and running. Sep 12 17:03:11.785560 kernel: SELinux: Initializing. Sep 12 17:03:11.785566 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:03:11.785573 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:03:11.785579 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:03:11.785586 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:03:11.785592 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 12 17:03:11.785599 kernel: Remapping and enabling EFI services. Sep 12 17:03:11.785605 kernel: smp: Bringing up secondary CPUs ... 
Sep 12 17:03:11.785618 kernel: Detected PIPT I-cache on CPU1 Sep 12 17:03:11.785624 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 12 17:03:11.785631 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 12 17:03:11.785779 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:03:11.785790 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 17:03:11.785797 kernel: Detected PIPT I-cache on CPU2 Sep 12 17:03:11.785805 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 12 17:03:11.785812 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 12 17:03:11.785823 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:03:11.785829 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 12 17:03:11.785837 kernel: Detected PIPT I-cache on CPU3 Sep 12 17:03:11.785843 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 12 17:03:11.785851 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Sep 12 17:03:11.785857 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:03:11.785864 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 12 17:03:11.785871 kernel: smp: Brought up 1 node, 4 CPUs Sep 12 17:03:11.785878 kernel: SMP: Total of 4 processors activated. Sep 12 17:03:11.785887 kernel: CPU: All CPU(s) started at EL1 Sep 12 17:03:11.785894 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 17:03:11.785901 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 17:03:11.785908 kernel: CPU features: detected: Common not Private translations Sep 12 17:03:11.785915 kernel: CPU features: detected: CRC32 instructions Sep 12 17:03:11.785922 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 12 17:03:11.785929 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 17:03:11.785936 kernel: CPU features: detected: LSE atomic instructions Sep 12 17:03:11.785943 kernel: CPU features: detected: Privileged Access Never Sep 12 17:03:11.785952 kernel: CPU features: detected: RAS Extension Support Sep 12 17:03:11.785967 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 12 17:03:11.785974 kernel: alternatives: applying system-wide alternatives Sep 12 17:03:11.785981 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 12 17:03:11.785989 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved) Sep 12 17:03:11.785997 kernel: devtmpfs: initialized Sep 12 17:03:11.786004 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:03:11.786010 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 12 17:03:11.786017 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 17:03:11.786026 kernel: 0 pages in range for non-PLT usage Sep 12 17:03:11.786033 kernel: 508576 pages in range for PLT usage Sep 12 17:03:11.786040 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:03:11.786047 kernel: SMBIOS 3.0.0 present. 
Sep 12 17:03:11.786054 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 12 17:03:11.786061 kernel: DMI: Memory slots populated: 1/1 Sep 12 17:03:11.786068 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:03:11.786075 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 17:03:11.786082 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 17:03:11.786090 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 17:03:11.786098 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:03:11.786105 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Sep 12 17:03:11.786112 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:03:11.786119 kernel: cpuidle: using governor menu Sep 12 17:03:11.786126 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 12 17:03:11.786133 kernel: ASID allocator initialised with 32768 entries Sep 12 17:03:11.786141 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:03:11.786148 kernel: Serial: AMBA PL011 UART driver Sep 12 17:03:11.786156 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:03:11.786163 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:03:11.786170 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 17:03:11.786177 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 17:03:11.786184 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:03:11.786191 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:03:11.786198 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 17:03:11.786205 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 17:03:11.786212 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:03:11.786220 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:03:11.786227 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:03:11.786234 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:03:11.786241 kernel: ACPI: Interpreter enabled Sep 12 17:03:11.786248 kernel: ACPI: Using GIC for interrupt routing Sep 12 17:03:11.786255 kernel: ACPI: MCFG table detected, 1 entries Sep 12 17:03:11.786262 kernel: ACPI: CPU0 has been hot-added Sep 12 17:03:11.786269 kernel: ACPI: CPU1 has been hot-added Sep 12 17:03:11.786275 kernel: ACPI: CPU2 has been hot-added Sep 12 17:03:11.786283 kernel: ACPI: CPU3 has been hot-added Sep 12 17:03:11.786291 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 12 17:03:11.786298 kernel: printk: legacy console [ttyAMA0] enabled Sep 12 17:03:11.786309 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:03:11.786464 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:03:11.786531 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 12 17:03:11.786589 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 12 17:03:11.786662 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 12 17:03:11.786725 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 12 17:03:11.786734 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 12 17:03:11.786741 
kernel: PCI host bridge to bus 0000:00 Sep 12 17:03:11.786811 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 12 17:03:11.786866 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 12 17:03:11.786918 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 12 17:03:11.786984 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:03:11.787074 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 12 17:03:11.787144 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 12 17:03:11.787205 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 12 17:03:11.787276 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 12 17:03:11.787336 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 12 17:03:11.787395 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 12 17:03:11.787460 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 12 17:03:11.787521 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 12 17:03:11.787574 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 12 17:03:11.787629 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 12 17:03:11.787709 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 12 17:03:11.787719 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 12 17:03:11.787726 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 12 17:03:11.787733 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 12 17:03:11.787743 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 12 17:03:11.787750 kernel: iommu: Default domain type: Translated Sep 12 17:03:11.787757 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 17:03:11.787764 kernel: efivars: Registered efivars operations Sep 12 17:03:11.787771 kernel: vgaarb: loaded Sep 12 17:03:11.787777 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 17:03:11.787784 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:03:11.787792 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:03:11.787799 kernel: pnp: PnP ACPI init Sep 12 17:03:11.787869 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 12 17:03:11.787879 kernel: pnp: PnP ACPI: found 1 devices Sep 12 17:03:11.787886 kernel: NET: Registered PF_INET protocol family Sep 12 17:03:11.787893 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:03:11.787900 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:03:11.787907 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:03:11.787914 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:03:11.787921 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:03:11.787930 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:03:11.787937 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:03:11.787944 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:03:11.787952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:03:11.787968 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:03:11.787976 
kernel: kvm [1]: HYP mode not available Sep 12 17:03:11.787983 kernel: Initialise system trusted keyrings Sep 12 17:03:11.787990 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:03:11.787997 kernel: Key type asymmetric registered Sep 12 17:03:11.788005 kernel: Asymmetric key parser 'x509' registered Sep 12 17:03:11.788013 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 12 17:03:11.788020 kernel: io scheduler mq-deadline registered Sep 12 17:03:11.788028 kernel: io scheduler kyber registered Sep 12 17:03:11.788035 kernel: io scheduler bfq registered Sep 12 17:03:11.788043 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 12 17:03:11.788049 kernel: ACPI: button: Power Button [PWRB] Sep 12 17:03:11.788057 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 12 17:03:11.788124 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 12 17:03:11.788135 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:03:11.788142 kernel: thunder_xcv, ver 1.0 Sep 12 17:03:11.788149 kernel: thunder_bgx, ver 1.0 Sep 12 17:03:11.788156 kernel: nicpf, ver 1.0 Sep 12 17:03:11.788163 kernel: nicvf, ver 1.0 Sep 12 17:03:11.788233 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 17:03:11.788289 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:03:11 UTC (1757696591) Sep 12 17:03:11.788298 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:03:11.788306 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 12 17:03:11.788314 kernel: watchdog: NMI not fully supported Sep 12 17:03:11.788321 kernel: watchdog: Hard watchdog permanently disabled Sep 12 17:03:11.788328 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:03:11.788335 kernel: Segment Routing with IPv6 Sep 12 17:03:11.788342 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:03:11.788349 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:03:11.788355 kernel: Key type dns_resolver registered Sep 12 17:03:11.788363 kernel: registered taskstats version 1 Sep 12 17:03:11.788369 kernel: Loading compiled-in X.509 certificates Sep 12 17:03:11.788378 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 7675c1947f324bc6524fdc1ee0f8f5f343acfea7' Sep 12 17:03:11.788385 kernel: Demotion targets for Node 0: null Sep 12 17:03:11.788412 kernel: Key type .fscrypt registered Sep 12 17:03:11.788419 kernel: Key type fscrypt-provisioning registered Sep 12 17:03:11.788426 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 17:03:11.788433 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:03:11.788440 kernel: ima: No architecture policies found Sep 12 17:03:11.788447 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 17:03:11.788455 kernel: clk: Disabling unused clocks Sep 12 17:03:11.788462 kernel: PM: genpd: Disabling unused power domains Sep 12 17:03:11.788469 kernel: Warning: unable to open an initial console. Sep 12 17:03:11.788476 kernel: Freeing unused kernel memory: 38912K Sep 12 17:03:11.788483 kernel: Run /init as init process Sep 12 17:03:11.788490 kernel: with arguments: Sep 12 17:03:11.788497 kernel: /init Sep 12 17:03:11.788503 kernel: with environment: Sep 12 17:03:11.788510 kernel: HOME=/ Sep 12 17:03:11.788517 kernel: TERM=linux Sep 12 17:03:11.788525 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:03:11.788533 systemd[1]: Successfully made /usr/ read-only. 
Sep 12 17:03:11.788543 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:03:11.788551 systemd[1]: Detected virtualization kvm. Sep 12 17:03:11.788559 systemd[1]: Detected architecture arm64. Sep 12 17:03:11.788566 systemd[1]: Running in initrd. Sep 12 17:03:11.788573 systemd[1]: No hostname configured, using default hostname. Sep 12 17:03:11.788582 systemd[1]: Hostname set to <localhost>. Sep 12 17:03:11.788589 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:03:11.788596 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:03:11.788604 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:03:11.788611 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:03:11.788619 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:03:11.788626 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:03:11.788634 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:03:11.788657 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:03:11.788666 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:03:11.788673 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:03:11.788681 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:03:11.788688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:03:11.788696 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:03:11.788703 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:03:11.788712 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:03:11.788719 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:03:11.788727 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:03:11.788734 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:03:11.788741 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:03:11.788749 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 17:03:11.788757 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:03:11.788764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:03:11.788774 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:03:11.788781 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:03:11.788789 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:03:11.788796 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:03:11.788804 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:03:11.788812 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 17:03:11.788819 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:03:11.788826 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:03:11.788834 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:03:11.788843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:03:11.788851 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:03:11.788858 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:03:11.788866 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:03:11.788875 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:03:11.788899 systemd-journald[242]: Collecting audit messages is disabled. Sep 12 17:03:11.788919 systemd-journald[242]: Journal started Sep 12 17:03:11.788939 systemd-journald[242]: Runtime Journal (/run/log/journal/1186e62b668e4603b4f9319652b5e4be) is 6M, max 48.5M, 42.4M free. Sep 12 17:03:11.780953 systemd-modules-load[245]: Inserted module 'overlay' Sep 12 17:03:11.796613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:03:11.798656 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:03:11.798674 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:03:11.800537 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:03:11.801803 kernel: Bridge firewalling registered Sep 12 17:03:11.800880 systemd-modules-load[245]: Inserted module 'br_netfilter' Sep 12 17:03:11.803267 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:03:11.807188 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:03:11.809041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:03:11.810773 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:03:11.827694 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:03:11.835430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:03:11.836412 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 17:03:11.839454 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:03:11.840904 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:03:11.844762 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:03:11.848244 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:03:11.854454 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 12 17:03:11.878007 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09 Sep 12 17:03:11.879284 systemd-resolved[283]: Positive Trust Anchors: Sep 12 17:03:11.879295 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:03:11.879327 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:03:11.884221 systemd-resolved[283]: Defaulting to hostname 'linux'. Sep 12 17:03:11.885190 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:03:11.886872 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:03:11.955669 kernel: SCSI subsystem initialized Sep 12 17:03:11.960661 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:03:11.970672 kernel: iscsi: registered transport (tcp) Sep 12 17:03:11.983667 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:03:11.983701 kernel: QLogic iSCSI HBA Driver Sep 12 17:03:12.000363 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:03:12.021280 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:03:12.023387 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:03:12.071688 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:03:12.074804 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:03:12.147687 kernel: raid6: neonx8 gen() 15774 MB/s Sep 12 17:03:12.164668 kernel: raid6: neonx4 gen() 15703 MB/s Sep 12 17:03:12.181707 kernel: raid6: neonx2 gen() 13226 MB/s Sep 12 17:03:12.198693 kernel: raid6: neonx1 gen() 10445 MB/s Sep 12 17:03:12.215660 kernel: raid6: int64x8 gen() 6883 MB/s Sep 12 17:03:12.232664 kernel: raid6: int64x4 gen() 7334 MB/s Sep 12 17:03:12.249658 kernel: raid6: int64x2 gen() 6098 MB/s Sep 12 17:03:12.266692 kernel: raid6: int64x1 gen() 5040 MB/s Sep 12 17:03:12.266753 kernel: raid6: using algorithm neonx8 gen() 15774 MB/s Sep 12 17:03:12.283705 kernel: raid6: .... xor() 12018 MB/s, rmw enabled Sep 12 17:03:12.283777 kernel: raid6: using neon recovery algorithm Sep 12 17:03:12.288872 kernel: xor: measuring software checksum speed Sep 12 17:03:12.288917 kernel: 8regs : 21466 MB/sec Sep 12 17:03:12.289977 kernel: 32regs : 20988 MB/sec Sep 12 17:03:12.290004 kernel: arm64_neon : 28089 MB/sec Sep 12 17:03:12.290021 kernel: xor: using function: arm64_neon (28089 MB/sec) Sep 12 17:03:12.343683 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:03:12.349879 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Sep 12 17:03:12.352248 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:03:12.380359 systemd-udevd[496]: Using default interface naming scheme 'v255'. Sep 12 17:03:12.384579 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:03:12.386523 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:03:12.410802 kernel: hrtimer: interrupt took 12053640 ns Sep 12 17:03:12.430287 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Sep 12 17:03:12.457667 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:03:12.460747 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:03:12.521234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:03:12.523808 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:03:12.571669 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 12 17:03:12.577395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:03:12.582745 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 17:03:12.585599 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:03:12.585615 kernel: GPT:9289727 != 19775487 Sep 12 17:03:12.585623 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:03:12.585654 kernel: GPT:9289727 != 19775487 Sep 12 17:03:12.577801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:03:12.588284 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:03:12.588305 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:03:12.582840 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:03:12.589064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:03:12.609987 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:03:12.617673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:03:12.624170 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:03:12.638121 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:03:12.646445 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:03:12.653232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:03:12.654283 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 17:03:12.656391 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:03:12.658925 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:03:12.660654 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:03:12.663034 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:03:12.664592 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:03:12.687438 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:03:12.690394 disk-uuid[591]: Primary Header is updated. 
Sep 12 17:03:12.690394 disk-uuid[591]: Secondary Entries is updated. Sep 12 17:03:12.690394 disk-uuid[591]: Secondary Header is updated. Sep 12 17:03:12.695670 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:03:12.697679 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:03:13.699679 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:03:13.700387 disk-uuid[599]: The operation has completed successfully. Sep 12 17:03:13.727625 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:03:13.727766 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:03:13.768760 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:03:13.784791 sh[613]: Success Sep 12 17:03:13.798756 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:03:13.798800 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:03:13.799890 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 17:03:13.811693 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 12 17:03:13.854525 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:03:13.857382 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:03:13.874281 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:03:13.891690 kernel: BTRFS: device fsid 752cb955-bdfa-486a-ad02-b54d5e61d194 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (625) Sep 12 17:03:13.891738 kernel: BTRFS info (device dm-0): first mount of filesystem 752cb955-bdfa-486a-ad02-b54d5e61d194 Sep 12 17:03:13.892670 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:03:13.902919 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:03:13.902981 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 17:03:13.904488 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:03:13.905652 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:03:13.906627 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:03:13.907411 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:03:13.911028 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:03:13.929977 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (658) Sep 12 17:03:13.930036 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:03:13.931748 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:03:13.936907 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:03:13.936966 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:03:13.943248 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:03:13.948304 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:03:13.950362 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:03:14.015694 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 12 17:03:14.019785 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:03:14.056778 systemd-networkd[800]: lo: Link UP Sep 12 17:03:14.056789 systemd-networkd[800]: lo: Gained carrier Sep 12 17:03:14.057517 systemd-networkd[800]: Enumeration completed Sep 12 17:03:14.057762 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:03:14.058303 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:03:14.058308 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:03:14.059359 systemd-networkd[800]: eth0: Link UP Sep 12 17:03:14.060064 systemd[1]: Reached target network.target - Network. Sep 12 17:03:14.060491 systemd-networkd[800]: eth0: Gained carrier Sep 12 17:03:14.060503 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:03:14.076590 ignition[711]: Ignition 2.21.0 Sep 12 17:03:14.076606 ignition[711]: Stage: fetch-offline Sep 12 17:03:14.076634 ignition[711]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:03:14.076656 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:03:14.076822 ignition[711]: parsed url from cmdline: "" Sep 12 17:03:14.076826 ignition[711]: no config URL provided Sep 12 17:03:14.076831 ignition[711]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:03:14.076837 ignition[711]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:03:14.076857 ignition[711]: op(1): [started] loading QEMU firmware config module Sep 12 17:03:14.082704 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:03:14.076861 ignition[711]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 17:03:14.087668 ignition[711]: op(1): [finished] loading QEMU firmware config module Sep 12 17:03:14.126610 ignition[711]: parsing config with SHA512: d824b263d426a06cf8081974a4c381afc8c659af9ab8032bfee840c9bc988eff01d9c7a0737cad3bdc9b4b2351f84c5ff964efe43789a05bd437d5f7194e2719 Sep 12 17:03:14.130871 unknown[711]: fetched base config from "system" Sep 12 17:03:14.130888 unknown[711]: fetched user config from "qemu" Sep 12 17:03:14.131290 ignition[711]: fetch-offline: fetch-offline passed Sep 12 17:03:14.131346 ignition[711]: Ignition finished successfully Sep 12 17:03:14.135025 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:03:14.136677 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:03:14.137420 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:03:14.176092 ignition[813]: Ignition 2.21.0 Sep 12 17:03:14.176106 ignition[813]: Stage: kargs Sep 12 17:03:14.176257 ignition[813]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:03:14.176266 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:03:14.178436 ignition[813]: kargs: kargs passed Sep 12 17:03:14.178491 ignition[813]: Ignition finished successfully Sep 12 17:03:14.182475 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:03:14.184337 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 17:03:14.225655 ignition[822]: Ignition 2.21.0 Sep 12 17:03:14.225668 ignition[822]: Stage: disks Sep 12 17:03:14.225857 ignition[822]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:03:14.225866 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:03:14.227311 ignition[822]: disks: disks passed Sep 12 17:03:14.228894 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:03:14.227366 ignition[822]: Ignition finished successfully Sep 12 17:03:14.230172 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:03:14.232732 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:03:14.234038 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:03:14.235373 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:03:14.236897 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:03:14.239033 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:03:14.277847 systemd-resolved[283]: Detected conflict on linux IN A 10.0.0.14 Sep 12 17:03:14.277858 systemd-resolved[283]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Sep 12 17:03:14.280727 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 17:03:14.283439 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:03:14.285510 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:03:14.354665 kernel: EXT4-fs (vda9): mounted filesystem c902100c-52b7-422c-84ac-d834d4db2717 r/w with ordered data mode. Quota mode: none. Sep 12 17:03:14.354997 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:03:14.356200 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:03:14.359293 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:03:14.361508 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:03:14.362451 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:03:14.362490 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:03:14.362511 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:03:14.370057 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:03:14.371971 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:03:14.378396 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841) Sep 12 17:03:14.378431 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:03:14.378441 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:03:14.382008 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:03:14.382053 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:03:14.385842 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:03:14.407845 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:03:14.411689 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:03:14.415794 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:03:14.419436 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:03:14.489270 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:03:14.491588 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:03:14.493141 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:03:14.514670 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:03:14.528784 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:03:14.540456 ignition[955]: INFO : Ignition 2.21.0 Sep 12 17:03:14.540456 ignition[955]: INFO : Stage: mount Sep 12 17:03:14.542158 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:03:14.542158 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:03:14.542158 ignition[955]: INFO : mount: mount passed Sep 12 17:03:14.542158 ignition[955]: INFO : Ignition finished successfully Sep 12 17:03:14.542932 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:03:14.544850 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:03:14.891004 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:03:14.896884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:03:14.925656 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968) Sep 12 17:03:14.927661 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:03:14.927699 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:03:14.929956 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:03:14.930007 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:03:14.931489 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:03:14.966517 ignition[985]: INFO : Ignition 2.21.0 Sep 12 17:03:14.966517 ignition[985]: INFO : Stage: files Sep 12 17:03:14.968516 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:03:14.968516 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:03:14.968516 ignition[985]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:03:14.971656 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:03:14.971656 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:03:14.975008 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:03:14.976395 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:03:14.977865 unknown[985]: wrote ssh authorized keys file for user: core Sep 12 17:03:14.978966 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:03:14.980304 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 12 17:03:14.981819 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 12 17:03:15.040132 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:03:15.325938 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 12 17:03:15.325938 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:03:15.330158 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 12 17:03:15.536467 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:03:15.583904 systemd-networkd[800]: eth0: Gained IPv6LL Sep 12 17:03:15.622558 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:03:15.624011 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:03:15.624011 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:03:15.624011 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:03:15.624011 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:03:15.624011 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:03:15.624011 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:03:15.624011 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:03:15.624011 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:03:15.635078 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:03:15.635078 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:03:15.635078 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:03:15.635078 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:03:15.635078 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:03:15.635078 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 12 17:03:16.077187 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:03:16.374245 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 17:03:16.374245 ignition[985]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 17:03:16.379428 ignition[985]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:03:16.382070 ignition[985]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:03:16.382070 ignition[985]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 17:03:16.382070 ignition[985]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 17:03:16.382070 ignition[985]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:03:16.382070 ignition[985]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:03:16.382070 ignition[985]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 17:03:16.382070 ignition[985]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 17:03:16.403651 ignition[985]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:03:16.407386 ignition[985]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:03:16.408793 ignition[985]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 17:03:16.408793 ignition[985]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:03:16.408793 ignition[985]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:03:16.408793 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:03:16.408793 ignition[985]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:03:16.408793 ignition[985]: INFO : files: files passed Sep 12 17:03:16.408793 ignition[985]: INFO : Ignition finished successfully Sep 12 17:03:16.409927 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:03:16.413091 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:03:16.416689 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:03:16.427825 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:03:16.427925 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:03:16.430481 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 17:03:16.432805 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:03:16.432805 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:03:16.435779 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:03:16.437145 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:03:16.438561 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:03:16.441148 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:03:16.502455 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:03:16.502586 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:03:16.504919 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:03:16.506026 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:03:16.507504 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:03:16.508428 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:03:16.537553 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:03:16.540706 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:03:16.562418 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:03:16.563451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:03:16.565140 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:03:16.566657 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:03:16.566789 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:03:16.569061 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:03:16.570069 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:03:16.571621 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:03:16.573404 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:03:16.575090 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:03:16.576805 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
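The Ignition "files" stage logged above (creating user "core", installing its SSH keys, fetching the Helm and cilium-cli tarballs over HTTPS, writing prepare-helm.service and its preset) is driven entirely by the machine's Ignition config, which is not itself shown in this journal. As a rough illustration only, the Python sketch below assembles a config fragment of the kind that could produce such entries; the field names follow the Ignition v3 spec as commonly documented, the URL and paths are copied from the log, and the SSH key and unit body are placeholders, so treat the whole thing as an assumption rather than the actual config used on this machine.

# Illustrative sketch (not the real config): build a minimal Ignition-style
# config asking for one remote file, one SSH key and one enabled unit,
# mirroring the operations logged by ignition[985] above.
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]}
        ]
    },
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"}}
        ]
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"}  # placeholder body
        ]
    },
}

print(json.dumps(config, indent=2))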
Sep 12 17:03:16.578954 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:03:16.580570 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:03:16.582441 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:03:16.583984 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:03:16.585631 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:03:16.587104 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:03:16.587255 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:03:16.589243 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:03:16.590224 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:03:16.591830 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:03:16.591930 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:03:16.593662 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:03:16.593793 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:03:16.596469 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:03:16.596593 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:03:16.598314 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:03:16.599915 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:03:16.600086 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:03:16.601599 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:03:16.603039 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:03:16.604501 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:03:16.604595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:03:16.606002 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:03:16.606081 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:03:16.607404 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:03:16.607532 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:03:16.609185 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:03:16.609291 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:03:16.611888 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:03:16.613773 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:03:16.615027 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:03:16.615155 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:03:16.616670 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:03:16.616780 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:03:16.622543 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:03:16.633690 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:03:16.643680 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 12 17:03:16.657425 ignition[1040]: INFO : Ignition 2.21.0 Sep 12 17:03:16.659723 ignition[1040]: INFO : Stage: umount Sep 12 17:03:16.659723 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:03:16.659723 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:03:16.663002 ignition[1040]: INFO : umount: umount passed Sep 12 17:03:16.663770 ignition[1040]: INFO : Ignition finished successfully Sep 12 17:03:16.666147 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:03:16.667667 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:03:16.668760 systemd[1]: Stopped target network.target - Network. Sep 12 17:03:16.670013 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:03:16.670071 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:03:16.671540 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:03:16.671583 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:03:16.673135 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:03:16.673176 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:03:16.674620 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:03:16.674677 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:03:16.676426 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:03:16.677968 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:03:16.685819 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:03:16.685970 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:03:16.689324 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:03:16.689654 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:03:16.689695 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:03:16.693959 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:03:16.694181 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:03:16.694305 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:03:16.697178 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:03:16.697581 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 17:03:16.699313 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:03:16.699345 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:03:16.701379 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:03:16.702266 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:03:16.702320 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:03:16.703915 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:03:16.703962 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:03:16.705968 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:03:16.706008 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Sep 12 17:03:16.707595 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:03:16.711110 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:03:16.719190 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:03:16.719336 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:03:16.721025 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:03:16.721126 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:03:16.723097 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:03:16.723162 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:03:16.724189 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:03:16.724218 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:03:16.725533 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:03:16.725575 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:03:16.727709 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:03:16.727754 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:03:16.730359 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:03:16.730413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:03:16.733500 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:03:16.734869 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 17:03:16.734927 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:03:16.737741 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:03:16.737784 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:03:16.740607 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:03:16.740678 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:03:16.744477 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:03:16.744529 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:03:16.746733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:03:16.746779 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:03:16.755180 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:03:16.755287 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:03:16.813315 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:03:16.813429 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:03:16.815078 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:03:16.816133 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:03:16.816195 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:03:16.819376 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:03:16.845664 systemd[1]: Switching root. 
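The block from 17:03:16.53 up to "Switching root." is the initrd teardown: systemd stops targets, sockets and services in dependency order before handing control to the real root filesystem. A quick way to inspect that ordering from a captured journal is to pull out the "Stopped"/"Closed"/"Deactivated successfully" messages; the regex below matches only the line shape visible in this log (timestamp, "systemd[1]:", message) and is meant as an ad-hoc analysis sketch, not a general journal parser.

# Sketch: extract teardown events from journal lines shaped like the ones above,
# e.g. "Sep 12 17:03:16.566789 systemd[1]: Stopped dracut-pre-pivot.service - ...".
import re

LINE = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd\[1\]: "
    r"(?P<msg>(Stopped|Closed|Stopping) .+|[\w@\\.-]+: Deactivated successfully\.)$"
)

def teardown_events(lines):
    """Yield (timestamp, message) for stop/close events, in log order."""
    for line in lines:
        m = LINE.match(line.strip())
        if m:
            yield m.group("ts"), m.group("msg")

sample = [
    "Sep 12 17:03:16.566657 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.",
    "Sep 12 17:03:16.566789 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.",
]
for ts, msg in teardown_events(sample):
    print(ts, msg)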
Sep 12 17:03:16.877812 systemd-journald[242]: Journal stopped Sep 12 17:03:17.674970 systemd-journald[242]: Received SIGTERM from PID 1 (systemd). Sep 12 17:03:17.675025 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:03:17.675043 kernel: SELinux: policy capability open_perms=1 Sep 12 17:03:17.675053 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:03:17.675064 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:03:17.675073 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:03:17.675086 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:03:17.675095 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:03:17.675105 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:03:17.675114 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 17:03:17.675123 kernel: audit: type=1403 audit(1757696597.120:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:03:17.675134 systemd[1]: Successfully loaded SELinux policy in 56.769ms. Sep 12 17:03:17.675148 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.495ms. Sep 12 17:03:17.675160 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:03:17.675172 systemd[1]: Detected virtualization kvm. Sep 12 17:03:17.675182 systemd[1]: Detected architecture arm64. Sep 12 17:03:17.675191 systemd[1]: Detected first boot. Sep 12 17:03:17.675200 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:03:17.675210 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:03:17.675219 zram_generator::config[1089]: No configuration found. Sep 12 17:03:17.675230 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:03:17.675241 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 17:03:17.675252 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:03:17.675262 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:03:17.675272 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:03:17.675282 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:03:17.675292 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:03:17.675302 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:03:17.675312 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:03:17.675322 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:03:17.675333 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:03:17.675344 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:03:17.675353 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:03:17.675363 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:03:17.675373 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
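The systemd 256.8 banner above encodes compile-time features as a list of "+"/"-" tokens (for example +SELINUX, -APPARMOR). When comparing builds it can be handy to split that string into enabled and disabled sets; the helper below does exactly that for a banner in the format shown, using a shortened excerpt of the logged string as input.

# Sketch: split a systemd feature banner like the one logged above into
# enabled (+FOO) and disabled (-FOO) feature sets.
def parse_features(banner: str):
    enabled, disabled = set(), set()
    for token in banner.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

banner = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT"
on, off = parse_features(banner)
print(sorted(on))   # ['AUDIT', 'IMA', 'PAM', 'SECCOMP', 'SELINUX', 'SMACK']
print(sorted(off))  # ['APPARMOR', 'GCRYPT']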
Sep 12 17:03:17.675383 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:03:17.675393 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:03:17.675403 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:03:17.675413 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:03:17.675425 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 17:03:17.675434 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:03:17.675444 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:03:17.675454 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:03:17.675464 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:03:17.675478 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:03:17.675488 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:03:17.675499 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:03:17.675512 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:03:17.675522 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:03:17.675538 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:03:17.675551 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:03:17.675564 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:03:17.675583 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:03:17.675597 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:03:17.675612 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:03:17.675625 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:03:17.675654 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:03:17.675666 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:03:17.675679 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:03:17.675689 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:03:17.675699 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:03:17.675709 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:03:17.675720 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:03:17.675730 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:03:17.675742 systemd[1]: Reached target machines.target - Containers. Sep 12 17:03:17.675753 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:03:17.675763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:03:17.675773 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:03:17.675783 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Sep 12 17:03:17.675793 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:03:17.675803 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:03:17.675813 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:03:17.675823 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:03:17.675834 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:03:17.675845 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:03:17.675855 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:03:17.675865 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:03:17.675874 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:03:17.675884 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:03:17.675894 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:03:17.675904 kernel: fuse: init (API version 7.41) Sep 12 17:03:17.675915 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:03:17.675925 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:03:17.675941 kernel: ACPI: bus type drm_connector registered Sep 12 17:03:17.675952 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:03:17.675962 kernel: loop: module loaded Sep 12 17:03:17.675971 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:03:17.675981 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:03:17.675991 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:03:17.676003 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:03:17.676013 systemd[1]: Stopped verity-setup.service. Sep 12 17:03:17.676024 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:03:17.676033 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:03:17.676080 systemd-journald[1164]: Collecting audit messages is disabled. Sep 12 17:03:17.676105 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:03:17.676116 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:03:17.676127 systemd-journald[1164]: Journal started Sep 12 17:03:17.676147 systemd-journald[1164]: Runtime Journal (/run/log/journal/1186e62b668e4603b4f9319652b5e4be) is 6M, max 48.5M, 42.4M free. Sep 12 17:03:17.486579 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:03:17.495905 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:03:17.496390 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:03:17.678663 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:03:17.679105 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:03:17.680111 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:03:17.681142 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Sep 12 17:03:17.682299 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:03:17.683560 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:03:17.683745 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:03:17.684877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:03:17.685059 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:03:17.686202 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:03:17.686355 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:03:17.687463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:03:17.687629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:03:17.688773 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:03:17.688929 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:03:17.689978 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:03:17.690135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:03:17.691505 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:03:17.692762 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:03:17.693929 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:03:17.695249 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:03:17.706414 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:03:17.708701 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:03:17.710516 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:03:17.711554 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:03:17.711588 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:03:17.713316 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 17:03:17.723488 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:03:17.724565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:03:17.725671 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:03:17.727486 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:03:17.728668 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:03:17.731292 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:03:17.732685 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:03:17.733537 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:03:17.738503 systemd-journald[1164]: Time spent on flushing to /var/log/journal/1186e62b668e4603b4f9319652b5e4be is 28.169ms for 893 entries. 
Sep 12 17:03:17.738503 systemd-journald[1164]: System Journal (/var/log/journal/1186e62b668e4603b4f9319652b5e4be) is 8M, max 195.6M, 187.6M free. Sep 12 17:03:17.780489 systemd-journald[1164]: Received client request to flush runtime journal. Sep 12 17:03:17.780545 kernel: loop0: detected capacity change from 0 to 119320 Sep 12 17:03:17.780584 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:03:17.735997 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:03:17.739563 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:03:17.742437 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:03:17.744066 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:03:17.749806 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:03:17.751901 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:03:17.755132 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:03:17.758206 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:03:17.759785 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:03:17.760718 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Sep 12 17:03:17.760731 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Sep 12 17:03:17.763983 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:03:17.770522 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:03:17.786782 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:03:17.793787 kernel: loop1: detected capacity change from 0 to 203944 Sep 12 17:03:17.795070 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:03:17.800173 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:03:17.805542 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:03:17.829667 kernel: loop2: detected capacity change from 0 to 100608 Sep 12 17:03:17.831282 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Sep 12 17:03:17.831579 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Sep 12 17:03:17.835123 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:03:17.870679 kernel: loop3: detected capacity change from 0 to 119320 Sep 12 17:03:17.876665 kernel: loop4: detected capacity change from 0 to 203944 Sep 12 17:03:17.884714 kernel: loop5: detected capacity change from 0 to 100608 Sep 12 17:03:17.889159 (sd-merge)[1231]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:03:17.889537 (sd-merge)[1231]: Merged extensions into '/usr'. Sep 12 17:03:17.893967 systemd[1]: Reload requested from client PID 1205 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:03:17.893989 systemd[1]: Reloading... Sep 12 17:03:17.956681 zram_generator::config[1260]: No configuration found. Sep 12 17:03:18.010713 ldconfig[1200]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:03:18.101490 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
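The (sd-merge) lines above show systemd-sysext picking up the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images (the kubernetes one was linked into /etc/extensions earlier by Ignition) and merging them into /usr, which is what triggers the reload that follows. A small sketch for pulling the extension names back out of such a line:

# Sketch: recover the extension names from an sd-merge log line such as
# "(sd-merge)[1231]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'."
import re

line = "(sd-merge)[1231]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'."
names = re.findall(r"'([^']+)'", line)
print(names)  # ['containerd-flatcar', 'docker-flatcar', 'kubernetes']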
Sep 12 17:03:18.101615 systemd[1]: Reloading finished in 207 ms. Sep 12 17:03:18.142313 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:03:18.143529 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:03:18.156796 systemd[1]: Starting ensure-sysext.service... Sep 12 17:03:18.158472 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:03:18.167655 systemd[1]: Reload requested from client PID 1291 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:03:18.167675 systemd[1]: Reloading... Sep 12 17:03:18.172119 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 17:03:18.172152 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 17:03:18.172401 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:03:18.172579 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:03:18.173245 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:03:18.173446 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Sep 12 17:03:18.173492 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Sep 12 17:03:18.176848 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:03:18.176862 systemd-tmpfiles[1292]: Skipping /boot Sep 12 17:03:18.182689 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:03:18.182702 systemd-tmpfiles[1292]: Skipping /boot Sep 12 17:03:18.209671 zram_generator::config[1317]: No configuration found. Sep 12 17:03:18.345341 systemd[1]: Reloading finished in 177 ms. Sep 12 17:03:18.365658 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:03:18.370956 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:03:18.381746 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:03:18.384062 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:03:18.385962 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:03:18.388785 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:03:18.391810 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:03:18.393880 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:03:18.399397 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:03:18.401705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:03:18.409552 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:03:18.413490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:03:18.416035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:03:18.417072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 17:03:18.417190 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:03:18.419844 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:03:18.423268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:03:18.423415 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:03:18.424842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:03:18.424992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:03:18.428437 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:03:18.428747 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:03:18.432623 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:03:18.441654 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:03:18.443203 systemd[1]: Finished ensure-sysext.service. Sep 12 17:03:18.446244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:03:18.447850 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:03:18.450022 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:03:18.452541 systemd-udevd[1360]: Using default interface naming scheme 'v255'. Sep 12 17:03:18.453010 augenrules[1392]: No rules Sep 12 17:03:18.454855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:03:18.459805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:03:18.460710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:03:18.460765 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:03:18.462438 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:03:18.467270 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:03:18.468317 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:03:18.468692 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:03:18.470039 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:03:18.470264 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:03:18.473083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:03:18.478388 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:03:18.479892 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:03:18.480063 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:03:18.481229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 12 17:03:18.481382 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:03:18.482526 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:03:18.488235 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:03:18.488429 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:03:18.490827 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:03:18.512264 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:03:18.513490 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:03:18.513555 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:03:18.529001 systemd-resolved[1358]: Positive Trust Anchors: Sep 12 17:03:18.529019 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:03:18.529051 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:03:18.535479 systemd-resolved[1358]: Defaulting to hostname 'linux'. Sep 12 17:03:18.537007 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:03:18.538925 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:03:18.566356 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 17:03:18.608727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:03:18.611273 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:03:18.644685 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:03:18.645902 systemd-networkd[1440]: lo: Link UP Sep 12 17:03:18.645915 systemd-networkd[1440]: lo: Gained carrier Sep 12 17:03:18.646162 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:03:18.646732 systemd-networkd[1440]: Enumeration completed Sep 12 17:03:18.647155 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:03:18.647165 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:03:18.647307 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:03:18.647716 systemd-networkd[1440]: eth0: Link UP Sep 12 17:03:18.647829 systemd-networkd[1440]: eth0: Gained carrier Sep 12 17:03:18.647848 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:03:18.649311 systemd[1]: Reached target network.target - Network. 
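The systemd-resolved "Positive Trust Anchors" line is a DNSSEC DS record for the root zone ("."): owner, class, type, then key tag, algorithm, digest type and the digest itself. Splitting it into those fields makes the line easier to read; the sketch below assumes only the whitespace-separated layout visible in the log.

# Sketch: split the DS trust-anchor record logged above into its fields.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, klass, rtype, key_tag, algorithm, digest_type, digest = record.split()
print(owner, rtype, "key_tag=" + key_tag, "alg=" + algorithm,
      "digest_type=" + digest_type, "digest_len=%d" % len(digest))
# . DS key_tag=20326 alg=8 digest_type=2 digest_len=64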
Sep 12 17:03:18.650672 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:03:18.652373 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:03:18.653769 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:03:18.654894 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:03:18.656042 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:03:18.656077 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:03:18.656721 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:03:18.656968 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:03:18.658397 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:03:18.658744 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Sep 12 17:03:18.659502 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:03:18.659508 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 17:03:18.659570 systemd-timesyncd[1404]: Initial clock synchronization to Fri 2025-09-12 17:03:18.726871 UTC. Sep 12 17:03:18.660876 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:03:18.662892 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:03:18.665154 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:03:18.669602 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:03:18.670839 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:03:18.672495 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:03:18.677132 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:03:18.678528 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:03:18.684129 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:03:18.688895 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:03:18.690469 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:03:18.691490 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:03:18.693527 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:03:18.694403 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:03:18.694432 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:03:18.695817 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:03:18.698767 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:03:18.701521 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:03:18.704356 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:03:18.708548 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
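eth0 acquires 10.0.0.14/16 with gateway 10.0.0.1 via DHCPv4, and systemd-timesyncd then reaches an NTP server at that same gateway address. Python's standard ipaddress module is a quick way to sanity-check such a lease, for example to confirm that the gateway actually sits inside the assigned subnet:

# Sketch: interpret the DHCPv4 lease logged above with the stdlib ipaddress module.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.14/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                 # 10.0.0.0/16
print(iface.network.num_addresses)   # 65536
print(gateway in iface.network)      # True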
Sep 12 17:03:18.709715 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:03:18.712600 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:03:18.715229 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:03:18.717528 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:03:18.721220 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:03:18.729786 jq[1475]: false Sep 12 17:03:18.730131 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:03:18.733481 extend-filesystems[1476]: Found /dev/vda6 Sep 12 17:03:18.733565 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:03:18.734084 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:03:18.735775 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:03:18.735861 extend-filesystems[1476]: Found /dev/vda9 Sep 12 17:03:18.738240 extend-filesystems[1476]: Checking size of /dev/vda9 Sep 12 17:03:18.744151 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:03:18.751079 jq[1495]: true Sep 12 17:03:18.751404 extend-filesystems[1476]: Resized partition /dev/vda9 Sep 12 17:03:18.752683 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:03:18.755068 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:03:18.755259 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:03:18.755503 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:03:18.755703 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:03:18.757512 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:03:18.757726 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:03:18.759700 extend-filesystems[1500]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 17:03:18.774062 update_engine[1491]: I20250912 17:03:18.773607 1491 main.cc:92] Flatcar Update Engine starting Sep 12 17:03:18.781656 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:03:18.780866 (ntainerd)[1504]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:03:18.781203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:03:18.797025 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:03:18.798312 jq[1503]: true Sep 12 17:03:18.805660 tar[1501]: linux-arm64/helm Sep 12 17:03:18.815659 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:03:18.824282 dbus-daemon[1472]: [system] SELinux support is enabled Sep 12 17:03:18.825007 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 12 17:03:18.828462 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:03:18.828496 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:03:18.830369 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:03:18.830384 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:03:18.833318 extend-filesystems[1500]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:03:18.833318 extend-filesystems[1500]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:03:18.833318 extend-filesystems[1500]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:03:18.840478 extend-filesystems[1476]: Resized filesystem in /dev/vda9 Sep 12 17:03:18.845754 update_engine[1491]: I20250912 17:03:18.835033 1491 update_check_scheduler.cc:74] Next update check in 11m11s Sep 12 17:03:18.833834 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:03:18.836421 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:03:18.840122 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:03:18.840383 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:03:18.869863 bash[1539]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:03:18.875634 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:03:18.884817 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:03:18.886873 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:03:18.887344 systemd-logind[1486]: New seat seat0. Sep 12 17:03:18.890739 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:03:18.892381 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
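extend-filesystems grows the root filesystem on /dev/vda9 online from 553472 to 1864699 blocks; with the 4k block size reported by resize2fs, that is a jump from roughly 2.1 GiB to roughly 7.1 GiB. The arithmetic, for the record:

# Sketch: convert the resize2fs block counts logged above into sizes.
BLOCK = 4096  # 4k blocks, as reported for /dev/vda9

old_blocks, new_blocks = 553_472, 1_864_699
for label, blocks in [("before", old_blocks), ("after", new_blocks)]:
    size = blocks * BLOCK
    print(f"{label}: {blocks} blocks = {size} bytes = {size / 2**30:.2f} GiB")
# before: 553472 blocks = 2267021312 bytes = 2.11 GiB
# after: 1864699 blocks = 7637807104 bytes = 7.11 GiB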
Sep 12 17:03:18.928090 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:03:18.953042 containerd[1504]: time="2025-09-12T17:03:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:03:18.953925 containerd[1504]: time="2025-09-12T17:03:18.953891160Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:03:18.966235 containerd[1504]: time="2025-09-12T17:03:18.966190160Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.4µs" Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966324200Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966348520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966496640Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966512800Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966534960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966581200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966591760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966841320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966858560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966887720Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966896320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:03:18.967674 containerd[1504]: time="2025-09-12T17:03:18.966986240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:03:18.967917 containerd[1504]: time="2025-09-12T17:03:18.967177000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:03:18.967917 containerd[1504]: time="2025-09-12T17:03:18.967204520Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:03:18.967917 containerd[1504]: time="2025-09-12T17:03:18.967215680Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:03:18.967917 containerd[1504]: time="2025-09-12T17:03:18.967251000Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:03:18.967917 containerd[1504]: time="2025-09-12T17:03:18.967462280Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:03:18.967917 containerd[1504]: time="2025-09-12T17:03:18.967521880Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:03:18.971383 containerd[1504]: time="2025-09-12T17:03:18.971349040Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:03:18.971510 containerd[1504]: time="2025-09-12T17:03:18.971497120Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:03:18.971604 containerd[1504]: time="2025-09-12T17:03:18.971590680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:03:18.971670 containerd[1504]: time="2025-09-12T17:03:18.971656240Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:03:18.971776 containerd[1504]: time="2025-09-12T17:03:18.971761440Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:03:18.971833 containerd[1504]: time="2025-09-12T17:03:18.971821360Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:03:18.971882 containerd[1504]: time="2025-09-12T17:03:18.971870720Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:03:18.971973 containerd[1504]: time="2025-09-12T17:03:18.971958160Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:03:18.972029 containerd[1504]: time="2025-09-12T17:03:18.972015680Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:03:18.972086 containerd[1504]: time="2025-09-12T17:03:18.972074720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:03:18.972135 containerd[1504]: time="2025-09-12T17:03:18.972123560Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:03:18.972188 containerd[1504]: time="2025-09-12T17:03:18.972176520Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:03:18.972367 containerd[1504]: time="2025-09-12T17:03:18.972345640Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:03:18.972446 containerd[1504]: time="2025-09-12T17:03:18.972431440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:03:18.972501 containerd[1504]: time="2025-09-12T17:03:18.972489400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 
17:03:18.972550 containerd[1504]: time="2025-09-12T17:03:18.972539040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 17:03:18.972599 containerd[1504]: time="2025-09-12T17:03:18.972587880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:03:18.972672 containerd[1504]: time="2025-09-12T17:03:18.972636600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:03:18.972743 containerd[1504]: time="2025-09-12T17:03:18.972729440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:03:18.972796 containerd[1504]: time="2025-09-12T17:03:18.972785320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 17:03:18.972844 containerd[1504]: time="2025-09-12T17:03:18.972833960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:03:18.972891 containerd[1504]: time="2025-09-12T17:03:18.972880200Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:03:18.972952 containerd[1504]: time="2025-09-12T17:03:18.972938320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:03:18.973183 containerd[1504]: time="2025-09-12T17:03:18.973168320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:03:18.973245 containerd[1504]: time="2025-09-12T17:03:18.973233840Z" level=info msg="Start snapshots syncer" Sep 12 17:03:18.973315 containerd[1504]: time="2025-09-12T17:03:18.973304280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:03:18.973757 containerd[1504]: time="2025-09-12T17:03:18.973716720Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:03:18.973922 containerd[1504]: time="2025-09-12T17:03:18.973905600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:03:18.974090 containerd[1504]: time="2025-09-12T17:03:18.974062080Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:03:18.974383 containerd[1504]: time="2025-09-12T17:03:18.974361280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:03:18.974459 containerd[1504]: time="2025-09-12T17:03:18.974445960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:03:18.974510 containerd[1504]: time="2025-09-12T17:03:18.974498160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:03:18.974562 containerd[1504]: time="2025-09-12T17:03:18.974550360Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:03:18.974612 containerd[1504]: time="2025-09-12T17:03:18.974601080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:03:18.974684 containerd[1504]: time="2025-09-12T17:03:18.974670880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:03:18.974753 containerd[1504]: time="2025-09-12T17:03:18.974739040Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:03:18.974817 containerd[1504]: time="2025-09-12T17:03:18.974804840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:03:18.974868 containerd[1504]: 
time="2025-09-12T17:03:18.974856560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:03:18.974918 containerd[1504]: time="2025-09-12T17:03:18.974906640Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:03:18.975027 containerd[1504]: time="2025-09-12T17:03:18.975011600Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:03:18.975089 containerd[1504]: time="2025-09-12T17:03:18.975074800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:03:18.975134 containerd[1504]: time="2025-09-12T17:03:18.975122280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:03:18.975190 containerd[1504]: time="2025-09-12T17:03:18.975177760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:03:18.975233 containerd[1504]: time="2025-09-12T17:03:18.975221960Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:03:18.975279 containerd[1504]: time="2025-09-12T17:03:18.975267160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:03:18.975327 containerd[1504]: time="2025-09-12T17:03:18.975316000Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:03:18.975441 containerd[1504]: time="2025-09-12T17:03:18.975430880Z" level=info msg="runtime interface created" Sep 12 17:03:18.975480 containerd[1504]: time="2025-09-12T17:03:18.975470720Z" level=info msg="created NRI interface" Sep 12 17:03:18.975531 containerd[1504]: time="2025-09-12T17:03:18.975519520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:03:18.975580 containerd[1504]: time="2025-09-12T17:03:18.975569680Z" level=info msg="Connect containerd service" Sep 12 17:03:18.975679 containerd[1504]: time="2025-09-12T17:03:18.975665520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:03:18.976531 containerd[1504]: time="2025-09-12T17:03:18.976505960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:03:19.048760 containerd[1504]: time="2025-09-12T17:03:19.048606698Z" level=info msg="Start subscribing containerd event" Sep 12 17:03:19.048760 containerd[1504]: time="2025-09-12T17:03:19.048749169Z" level=info msg="Start recovering state" Sep 12 17:03:19.048873 containerd[1504]: time="2025-09-12T17:03:19.048850818Z" level=info msg="Start event monitor" Sep 12 17:03:19.048914 containerd[1504]: time="2025-09-12T17:03:19.048866025Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:03:19.048914 containerd[1504]: time="2025-09-12T17:03:19.048886396Z" level=info msg="Start streaming server" Sep 12 17:03:19.048914 containerd[1504]: time="2025-09-12T17:03:19.048895956Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:03:19.048914 containerd[1504]: 
time="2025-09-12T17:03:19.048903337Z" level=info msg="runtime interface starting up..." Sep 12 17:03:19.048914 containerd[1504]: time="2025-09-12T17:03:19.048909186Z" level=info msg="starting plugins..." Sep 12 17:03:19.048992 containerd[1504]: time="2025-09-12T17:03:19.048922175Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:03:19.049304 containerd[1504]: time="2025-09-12T17:03:19.049150564Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:03:19.050730 containerd[1504]: time="2025-09-12T17:03:19.049547118Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:03:19.050730 containerd[1504]: time="2025-09-12T17:03:19.049623799Z" level=info msg="containerd successfully booted in 0.096931s" Sep 12 17:03:19.049740 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:03:19.115554 tar[1501]: linux-arm64/LICENSE Sep 12 17:03:19.115682 tar[1501]: linux-arm64/README.md Sep 12 17:03:19.129973 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:03:20.382849 systemd-networkd[1440]: eth0: Gained IPv6LL Sep 12 17:03:20.386242 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:03:20.388071 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:03:20.391172 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:03:20.405848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:03:20.407707 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:03:20.429631 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:03:20.429871 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:03:20.431513 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:03:20.433579 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:03:20.510645 sshd_keygen[1505]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:03:20.531305 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:03:20.533946 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:03:20.556159 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:03:20.556428 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:03:20.560827 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:03:20.580731 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:03:20.584072 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:03:20.586165 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:03:20.587359 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:03:20.996515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:03:20.998046 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:03:20.999365 systemd[1]: Startup finished in 2.018s (kernel) + 5.509s (initrd) + 3.935s (userspace) = 11.463s. 
Sep 12 17:03:21.001987 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:03:21.361986 kubelet[1613]: E0912 17:03:21.361877 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:03:21.364459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:03:21.364622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:03:21.365775 systemd[1]: kubelet.service: Consumed 772ms CPU time, 255.6M memory peak. Sep 12 17:03:24.695268 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:03:24.697993 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:34988.service - OpenSSH per-connection server daemon (10.0.0.1:34988). Sep 12 17:03:24.786898 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 34988 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:03:24.789134 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:03:24.797233 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:03:24.798381 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:03:24.807190 systemd-logind[1486]: New session 1 of user core. Sep 12 17:03:24.821676 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:03:24.824454 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:03:24.844354 (systemd)[1631]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:03:24.847001 systemd-logind[1486]: New session c1 of user core. Sep 12 17:03:24.972666 systemd[1631]: Queued start job for default target default.target. Sep 12 17:03:24.992763 systemd[1631]: Created slice app.slice - User Application Slice. Sep 12 17:03:24.992796 systemd[1631]: Reached target paths.target - Paths. Sep 12 17:03:24.992843 systemd[1631]: Reached target timers.target - Timers. Sep 12 17:03:24.994184 systemd[1631]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:03:25.005348 systemd[1631]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:03:25.005474 systemd[1631]: Reached target sockets.target - Sockets. Sep 12 17:03:25.005517 systemd[1631]: Reached target basic.target - Basic System. Sep 12 17:03:25.005547 systemd[1631]: Reached target default.target - Main User Target. Sep 12 17:03:25.005574 systemd[1631]: Startup finished in 152ms. Sep 12 17:03:25.005946 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:03:25.007378 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:03:25.069050 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:35002.service - OpenSSH per-connection server daemon (10.0.0.1:35002). Sep 12 17:03:25.123773 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 35002 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:03:25.125188 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:03:25.130265 systemd-logind[1486]: New session 2 of user core. 
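The kubelet exit above is a missing-file condition rather than a crash: /var/lib/kubelet/config.yaml does not exist yet (it is typically written when the node is joined to a cluster), so the unit fails and systemd retries it later in this log. A minimal sketch of that precondition check, with the path taken verbatim from the error message:

```go
// kubeletcfg.go: check for the kubelet config file whose absence causes the
// repeated kubelet.service failures seen in this log.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path from the kubelet error above
	if _, err := os.Stat(path); err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			fmt.Printf("%s is missing; kubelet will keep exiting until it is written\n", path)
		} else {
			fmt.Printf("cannot stat %s: %v\n", path, err)
		}
		os.Exit(1)
	}
	fmt.Println("kubelet config present:", path)
}
```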
Sep 12 17:03:25.140843 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:03:25.193481 sshd[1645]: Connection closed by 10.0.0.1 port 35002 Sep 12 17:03:25.193964 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Sep 12 17:03:25.212020 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:35002.service: Deactivated successfully. Sep 12 17:03:25.214271 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:03:25.214921 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:03:25.217347 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:35004.service - OpenSSH per-connection server daemon (10.0.0.1:35004). Sep 12 17:03:25.218242 systemd-logind[1486]: Removed session 2. Sep 12 17:03:25.272679 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 35004 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:03:25.274275 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:03:25.278179 systemd-logind[1486]: New session 3 of user core. Sep 12 17:03:25.294842 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:03:25.346342 sshd[1654]: Connection closed by 10.0.0.1 port 35004 Sep 12 17:03:25.346696 sshd-session[1651]: pam_unix(sshd:session): session closed for user core Sep 12 17:03:25.365730 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:35004.service: Deactivated successfully. Sep 12 17:03:25.367155 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:03:25.367964 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:03:25.370133 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:35006.service - OpenSSH per-connection server daemon (10.0.0.1:35006). Sep 12 17:03:25.370662 systemd-logind[1486]: Removed session 3. Sep 12 17:03:25.433192 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 35006 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:03:25.434691 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:03:25.439264 systemd-logind[1486]: New session 4 of user core. Sep 12 17:03:25.453891 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:03:25.507034 sshd[1663]: Connection closed by 10.0.0.1 port 35006 Sep 12 17:03:25.507397 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Sep 12 17:03:25.519840 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:35006.service: Deactivated successfully. Sep 12 17:03:25.521461 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:03:25.523783 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:03:25.526960 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:35022.service - OpenSSH per-connection server daemon (10.0.0.1:35022). Sep 12 17:03:25.527669 systemd-logind[1486]: Removed session 4. Sep 12 17:03:25.602176 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 35022 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:03:25.603724 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:03:25.609887 systemd-logind[1486]: New session 5 of user core. Sep 12 17:03:25.616946 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 17:03:25.678264 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:03:25.678561 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:03:25.697825 sudo[1673]: pam_unix(sudo:session): session closed for user root Sep 12 17:03:25.700098 sshd[1672]: Connection closed by 10.0.0.1 port 35022 Sep 12 17:03:25.702390 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Sep 12 17:03:25.717480 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:35022.service: Deactivated successfully. Sep 12 17:03:25.720011 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:03:25.722393 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:03:25.725454 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:35028.service - OpenSSH per-connection server daemon (10.0.0.1:35028). Sep 12 17:03:25.726130 systemd-logind[1486]: Removed session 5. Sep 12 17:03:25.785364 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 35028 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:03:25.786768 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:03:25.792707 systemd-logind[1486]: New session 6 of user core. Sep 12 17:03:25.802908 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:03:25.858453 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:03:25.859062 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:03:25.866191 sudo[1684]: pam_unix(sudo:session): session closed for user root Sep 12 17:03:25.872793 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:03:25.873067 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:03:25.885174 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:03:25.932510 augenrules[1706]: No rules Sep 12 17:03:25.933961 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:03:25.934456 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:03:25.935868 sudo[1683]: pam_unix(sudo:session): session closed for user root Sep 12 17:03:25.937265 sshd[1682]: Connection closed by 10.0.0.1 port 35028 Sep 12 17:03:25.937701 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Sep 12 17:03:25.949154 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:35028.service: Deactivated successfully. Sep 12 17:03:25.951254 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:03:25.952121 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:03:25.954510 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:35036.service - OpenSSH per-connection server daemon (10.0.0.1:35036). Sep 12 17:03:25.955482 systemd-logind[1486]: Removed session 6. Sep 12 17:03:26.014732 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 35036 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:03:26.015267 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:03:26.023027 systemd-logind[1486]: New session 7 of user core. Sep 12 17:03:26.039908 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 17:03:26.091745 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:03:26.092023 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:03:26.399546 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:03:26.421101 (dockerd)[1740]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:03:26.647357 dockerd[1740]: time="2025-09-12T17:03:26.647280139Z" level=info msg="Starting up" Sep 12 17:03:26.648301 dockerd[1740]: time="2025-09-12T17:03:26.648275383Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:03:26.662362 dockerd[1740]: time="2025-09-12T17:03:26.662224602Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:03:26.783771 dockerd[1740]: time="2025-09-12T17:03:26.783710509Z" level=info msg="Loading containers: start." Sep 12 17:03:26.793689 kernel: Initializing XFRM netlink socket Sep 12 17:03:27.175003 systemd-networkd[1440]: docker0: Link UP Sep 12 17:03:27.206809 dockerd[1740]: time="2025-09-12T17:03:27.206568419Z" level=info msg="Loading containers: done." Sep 12 17:03:27.263892 dockerd[1740]: time="2025-09-12T17:03:27.263784651Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:03:27.263892 dockerd[1740]: time="2025-09-12T17:03:27.263897938Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 17:03:27.264122 dockerd[1740]: time="2025-09-12T17:03:27.263991207Z" level=info msg="Initializing buildkit" Sep 12 17:03:27.429402 dockerd[1740]: time="2025-09-12T17:03:27.429251374Z" level=info msg="Completed buildkit initialization" Sep 12 17:03:27.435241 dockerd[1740]: time="2025-09-12T17:03:27.434465993Z" level=info msg="Daemon has completed initialization" Sep 12 17:03:27.435241 dockerd[1740]: time="2025-09-12T17:03:27.434542654Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:03:27.434858 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:03:28.044286 containerd[1504]: time="2025-09-12T17:03:28.044196896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:03:28.838355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993926041.mount: Deactivated successfully. 
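Once dockerd reports "API listen on /run/docker.sock", the Engine API is reachable over that unix socket. A minimal sketch (standard-library Go only, socket path taken from the log) that sends the Engine's `GET /_ping` health check over it:

```go
// dockerping.go: ping the Docker Engine API over the unix socket from the log.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	sock := "/run/docker.sock" // "API listen on /run/docker.sock" above
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Route every request to the unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", sock)
			},
		},
	}
	// The host in the URL is ignored once DialContext pins the connection to the socket.
	resp, err := client.Get("http://localhost/_ping")
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s body=%q api-version=%s\n",
		resp.Status, body, resp.Header.Get("Api-Version"))
}
```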
Sep 12 17:03:30.078243 containerd[1504]: time="2025-09-12T17:03:30.078162642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:30.079370 containerd[1504]: time="2025-09-12T17:03:30.079334792Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327" Sep 12 17:03:30.080666 containerd[1504]: time="2025-09-12T17:03:30.080496281Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:30.084426 containerd[1504]: time="2025-09-12T17:03:30.084368540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:30.085690 containerd[1504]: time="2025-09-12T17:03:30.085512796Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.041093143s" Sep 12 17:03:30.085690 containerd[1504]: time="2025-09-12T17:03:30.085555077Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 12 17:03:30.086936 containerd[1504]: time="2025-09-12T17:03:30.086883770Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:03:31.545361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:03:31.547904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:03:31.691883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
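The pull messages above carry enough data to estimate registry throughput: the kube-apiserver image reports 25,687,327 bytes read and a pull time of 2.041093143s, roughly 12 MiB/s. A small sketch redoing that arithmetic from the two logged values:

```go
// pullrate.go: estimate image pull throughput from the values containerd logs
// ("bytes read=..." and the "... in <duration>" suffix of the Pulled message).
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 25687327 // from "stop pulling image ... bytes read=25687327"
	d, err := time.ParseDuration("2.041093143s") // from "... in 2.041093143s"
	if err != nil {
		panic(err)
	}
	rate := float64(bytesRead) / d.Seconds()
	fmt.Printf("%d bytes in %s -> %.1f MiB/s\n", bytesRead, d, rate/(1<<20))
}
```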
Sep 12 17:03:31.696495 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:03:31.754014 containerd[1504]: time="2025-09-12T17:03:31.753960534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:31.754759 containerd[1504]: time="2025-09-12T17:03:31.754701991Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769" Sep 12 17:03:31.755829 containerd[1504]: time="2025-09-12T17:03:31.755797247Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:31.759069 containerd[1504]: time="2025-09-12T17:03:31.759024036Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.672093939s" Sep 12 17:03:31.759069 containerd[1504]: time="2025-09-12T17:03:31.759070475Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 12 17:03:31.761805 containerd[1504]: time="2025-09-12T17:03:31.761741882Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:03:31.761903 containerd[1504]: time="2025-09-12T17:03:31.761771052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:31.793595 kubelet[2026]: E0912 17:03:31.793543 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:03:31.796877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:03:31.797013 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:03:31.797575 systemd[1]: kubelet.service: Consumed 174ms CPU time, 109M memory peak. 
Sep 12 17:03:33.337500 containerd[1504]: time="2025-09-12T17:03:33.337435096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:33.338194 containerd[1504]: time="2025-09-12T17:03:33.338137888Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508" Sep 12 17:03:33.339494 containerd[1504]: time="2025-09-12T17:03:33.339447308Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:33.341674 containerd[1504]: time="2025-09-12T17:03:33.341390751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:33.342649 containerd[1504]: time="2025-09-12T17:03:33.342594393Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.580779992s" Sep 12 17:03:33.342803 containerd[1504]: time="2025-09-12T17:03:33.342733614Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 12 17:03:33.343305 containerd[1504]: time="2025-09-12T17:03:33.343280404Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:03:34.460581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2949260949.mount: Deactivated successfully. 
Sep 12 17:03:34.692424 containerd[1504]: time="2025-09-12T17:03:34.692371708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:34.693372 containerd[1504]: time="2025-09-12T17:03:34.693183350Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909" Sep 12 17:03:34.694344 containerd[1504]: time="2025-09-12T17:03:34.694310510Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:34.696150 containerd[1504]: time="2025-09-12T17:03:34.696104989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:34.697083 containerd[1504]: time="2025-09-12T17:03:34.697033363Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.353717636s" Sep 12 17:03:34.697083 containerd[1504]: time="2025-09-12T17:03:34.697069164Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 12 17:03:34.697818 containerd[1504]: time="2025-09-12T17:03:34.697794708Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:03:35.203245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221537664.mount: Deactivated successfully. 
Sep 12 17:03:35.921613 containerd[1504]: time="2025-09-12T17:03:35.920594630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:35.921613 containerd[1504]: time="2025-09-12T17:03:35.921573523Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 12 17:03:35.922097 containerd[1504]: time="2025-09-12T17:03:35.922067935Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:35.924568 containerd[1504]: time="2025-09-12T17:03:35.924538871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:35.925764 containerd[1504]: time="2025-09-12T17:03:35.925712959Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.227799797s" Sep 12 17:03:35.925764 containerd[1504]: time="2025-09-12T17:03:35.925752638Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 17:03:35.926294 containerd[1504]: time="2025-09-12T17:03:35.926185989Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:03:36.363895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184874227.mount: Deactivated successfully. 
Sep 12 17:03:36.367748 containerd[1504]: time="2025-09-12T17:03:36.367695474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:03:36.368290 containerd[1504]: time="2025-09-12T17:03:36.368236625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 12 17:03:36.370708 containerd[1504]: time="2025-09-12T17:03:36.370098685Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:03:36.374182 containerd[1504]: time="2025-09-12T17:03:36.374131313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:03:36.375099 containerd[1504]: time="2025-09-12T17:03:36.375068729Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 448.349811ms" Sep 12 17:03:36.375216 containerd[1504]: time="2025-09-12T17:03:36.375199803Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:03:36.375723 containerd[1504]: time="2025-09-12T17:03:36.375698397Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:03:36.901330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41368292.mount: Deactivated successfully. 
Sep 12 17:03:38.528390 containerd[1504]: time="2025-09-12T17:03:38.528302612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:38.528991 containerd[1504]: time="2025-09-12T17:03:38.528936114Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 12 17:03:38.530563 containerd[1504]: time="2025-09-12T17:03:38.530525933Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:38.534718 containerd[1504]: time="2025-09-12T17:03:38.534683183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:03:38.536087 containerd[1504]: time="2025-09-12T17:03:38.535843836Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.160115416s" Sep 12 17:03:38.536087 containerd[1504]: time="2025-09-12T17:03:38.535884743Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 12 17:03:42.045498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:03:42.047686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:03:42.236692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:03:42.241224 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:03:42.277111 kubelet[2189]: E0912 17:03:42.277044 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:03:42.279733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:03:42.280021 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:03:42.281736 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.3M memory peak. Sep 12 17:03:44.678021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:03:44.678338 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.3M memory peak. Sep 12 17:03:44.681589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:03:44.714884 systemd[1]: Reload requested from client PID 2204 ('systemctl') (unit session-7.scope)... Sep 12 17:03:44.714906 systemd[1]: Reloading... Sep 12 17:03:44.802855 zram_generator::config[2247]: No configuration found. Sep 12 17:03:45.110069 systemd[1]: Reloading finished in 394 ms. Sep 12 17:03:45.171188 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:03:45.171273 systemd[1]: kubelet.service: Failed with result 'signal'. 
Sep 12 17:03:45.171538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:03:45.171584 systemd[1]: kubelet.service: Consumed 97ms CPU time, 95M memory peak. Sep 12 17:03:45.173370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:03:45.322452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:03:45.328171 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:03:45.370732 kubelet[2292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:03:45.370732 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:03:45.370732 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:03:45.370732 kubelet[2292]: I0912 17:03:45.370468 2292 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:03:46.268484 kubelet[2292]: I0912 17:03:46.268414 2292 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:03:46.268484 kubelet[2292]: I0912 17:03:46.268452 2292 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:03:46.268748 kubelet[2292]: I0912 17:03:46.268721 2292 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:03:46.291027 kubelet[2292]: I0912 17:03:46.290784 2292 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:03:46.291442 kubelet[2292]: E0912 17:03:46.291366 2292 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:03:46.300601 kubelet[2292]: I0912 17:03:46.300519 2292 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:03:46.304089 kubelet[2292]: I0912 17:03:46.304068 2292 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:03:46.305695 kubelet[2292]: I0912 17:03:46.305005 2292 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:03:46.305695 kubelet[2292]: I0912 17:03:46.305150 2292 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:03:46.305695 kubelet[2292]: I0912 17:03:46.305178 2292 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:03:46.305695 kubelet[2292]: I0912 17:03:46.305408 2292 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:03:46.305879 kubelet[2292]: I0912 17:03:46.305416 2292 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:03:46.305879 kubelet[2292]: I0912 17:03:46.305667 2292 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:03:46.307839 kubelet[2292]: I0912 17:03:46.307806 2292 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:03:46.307877 kubelet[2292]: I0912 17:03:46.307840 2292 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:03:46.307877 kubelet[2292]: I0912 17:03:46.307861 2292 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:03:46.307953 kubelet[2292]: I0912 17:03:46.307942 2292 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:03:46.310448 kubelet[2292]: W0912 17:03:46.310248 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 12 17:03:46.310448 kubelet[2292]: E0912 17:03:46.310369 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:03:46.310955 kubelet[2292]: W0912 17:03:46.310921 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 12 17:03:46.311036 kubelet[2292]: E0912 17:03:46.311020 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:03:46.312377 kubelet[2292]: I0912 17:03:46.312355 2292 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:03:46.313138 kubelet[2292]: I0912 17:03:46.313112 2292 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:03:46.313327 kubelet[2292]: W0912 17:03:46.313315 2292 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:03:46.314202 kubelet[2292]: I0912 17:03:46.314178 2292 server.go:1274] "Started kubelet" Sep 12 17:03:46.314775 kubelet[2292]: I0912 17:03:46.314748 2292 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:03:46.315066 kubelet[2292]: I0912 17:03:46.315019 2292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:03:46.315913 kubelet[2292]: I0912 17:03:46.315744 2292 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:03:46.316579 kubelet[2292]: I0912 17:03:46.316558 2292 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:03:46.317393 kubelet[2292]: I0912 17:03:46.317375 2292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:03:46.317936 kubelet[2292]: I0912 17:03:46.317875 2292 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:03:46.318593 kubelet[2292]: I0912 17:03:46.318568 2292 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:03:46.318988 kubelet[2292]: I0912 17:03:46.318969 2292 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:03:46.319163 kubelet[2292]: I0912 17:03:46.319150 2292 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:03:46.320980 kubelet[2292]: W0912 17:03:46.320937 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 12 17:03:46.321040 kubelet[2292]: E0912 17:03:46.321022 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 12 
17:03:46.321197 kubelet[2292]: I0912 17:03:46.321176 2292 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:03:46.321292 kubelet[2292]: I0912 17:03:46.321273 2292 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:03:46.321956 kubelet[2292]: E0912 17:03:46.321933 2292 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:03:46.323037 kubelet[2292]: E0912 17:03:46.323011 2292 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:03:46.323131 kubelet[2292]: E0912 17:03:46.323109 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms" Sep 12 17:03:46.323612 kubelet[2292]: I0912 17:03:46.323590 2292 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:03:46.323772 kubelet[2292]: E0912 17:03:46.318528 2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186497c4da5b9f5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:03:46.314157916 +0000 UTC m=+0.981077529,LastTimestamp:2025-09-12 17:03:46.314157916 +0000 UTC m=+0.981077529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:03:46.335931 kubelet[2292]: I0912 17:03:46.335751 2292 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:03:46.335931 kubelet[2292]: I0912 17:03:46.335770 2292 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:03:46.335931 kubelet[2292]: I0912 17:03:46.335786 2292 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:03:46.337588 kubelet[2292]: I0912 17:03:46.337437 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:03:46.338472 kubelet[2292]: I0912 17:03:46.338455 2292 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:03:46.338551 kubelet[2292]: I0912 17:03:46.338541 2292 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:03:46.338619 kubelet[2292]: I0912 17:03:46.338609 2292 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:03:46.338993 kubelet[2292]: E0912 17:03:46.338960 2292 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:03:46.413581 kubelet[2292]: I0912 17:03:46.413536 2292 policy_none.go:49] "None policy: Start" Sep 12 17:03:46.414205 kubelet[2292]: W0912 17:03:46.414145 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 12 17:03:46.414273 kubelet[2292]: E0912 17:03:46.414206 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:03:46.414564 kubelet[2292]: I0912 17:03:46.414550 2292 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:03:46.414602 kubelet[2292]: I0912 17:03:46.414590 2292 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:03:46.420137 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:03:46.423004 kubelet[2292]: E0912 17:03:46.422976 2292 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:03:46.431595 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:03:46.439588 kubelet[2292]: E0912 17:03:46.439551 2292 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:03:46.453923 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 12 17:03:46.455316 kubelet[2292]: I0912 17:03:46.455064 2292 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:03:46.455316 kubelet[2292]: I0912 17:03:46.455245 2292 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:03:46.455316 kubelet[2292]: I0912 17:03:46.455267 2292 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:03:46.455921 kubelet[2292]: I0912 17:03:46.455831 2292 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:03:46.456693 kubelet[2292]: E0912 17:03:46.456678 2292 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:03:46.524687 kubelet[2292]: E0912 17:03:46.523927 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms" Sep 12 17:03:46.557603 kubelet[2292]: I0912 17:03:46.557086 2292 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:03:46.557603 kubelet[2292]: E0912 17:03:46.557565 2292 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Sep 12 17:03:46.647009 systemd[1]: Created slice kubepods-burstable-podba1eabdfe97c8fea954b6bb2c90706fb.slice - libcontainer container kubepods-burstable-podba1eabdfe97c8fea954b6bb2c90706fb.slice. Sep 12 17:03:46.671270 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 12 17:03:46.687635 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. 
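The three kubepods-burstable-pod<uid>.slice units above correspond to the control-plane static pods the kubelet found under its static pod path ("Adding static pod path" path="/etc/kubernetes/manifests" earlier in the log). A minimal sketch that lists that directory to see which manifests will become mirror pods; the expected file names in the comment are typical for a kubeadm-style layout, not confirmed by this log:

```go
// staticpods.go: list the static pod manifests the kubelet watches
// (path taken from the "Adding static pod path" log line).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/manifests"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if ext == ".yaml" || ext == ".yml" {
			// Typically kube-apiserver.yaml, kube-controller-manager.yaml and
			// kube-scheduler.yaml (plus etcd.yaml on a stacked control plane).
			fmt.Println(filepath.Join(dir, e.Name()))
		}
	}
}
```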
Sep 12 17:03:46.719971 kubelet[2292]: I0912 17:03:46.719935 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:03:46.720248 kubelet[2292]: I0912 17:03:46.720123 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:03:46.720248 kubelet[2292]: I0912 17:03:46.720157 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:03:46.720248 kubelet[2292]: I0912 17:03:46.720188 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:03:46.720248 kubelet[2292]: I0912 17:03:46.720203 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba1eabdfe97c8fea954b6bb2c90706fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba1eabdfe97c8fea954b6bb2c90706fb\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:03:46.720248 kubelet[2292]: I0912 17:03:46.720232 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba1eabdfe97c8fea954b6bb2c90706fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba1eabdfe97c8fea954b6bb2c90706fb\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:03:46.720475 kubelet[2292]: I0912 17:03:46.720421 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba1eabdfe97c8fea954b6bb2c90706fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ba1eabdfe97c8fea954b6bb2c90706fb\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:03:46.720475 kubelet[2292]: I0912 17:03:46.720447 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:03:46.720559 kubelet[2292]: I0912 17:03:46.720462 2292 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 12 17:03:46.758844 kubelet[2292]: I0912 17:03:46.758822 2292 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:03:46.759200 kubelet[2292]: E0912 17:03:46.759175 2292 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Sep 12 17:03:46.924812 kubelet[2292]: E0912 17:03:46.924768 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms" Sep 12 17:03:46.971684 containerd[1504]: time="2025-09-12T17:03:46.971614170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ba1eabdfe97c8fea954b6bb2c90706fb,Namespace:kube-system,Attempt:0,}" Sep 12 17:03:46.986530 containerd[1504]: time="2025-09-12T17:03:46.986485976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 17:03:46.994131 containerd[1504]: time="2025-09-12T17:03:46.993683585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 17:03:46.994945 containerd[1504]: time="2025-09-12T17:03:46.994901343Z" level=info msg="connecting to shim 36abc1d2ef2447a0c715cbf033012f8516e0060c1c5c0a8acceb3e95ea88ea02" address="unix:///run/containerd/s/9bc93d14f053712fb786287125cefed2f1bdb6ad397fd4d5d83237afded44d15" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:03:47.020848 systemd[1]: Started cri-containerd-36abc1d2ef2447a0c715cbf033012f8516e0060c1c5c0a8acceb3e95ea88ea02.scope - libcontainer container 36abc1d2ef2447a0c715cbf033012f8516e0060c1c5c0a8acceb3e95ea88ea02. Sep 12 17:03:47.026466 containerd[1504]: time="2025-09-12T17:03:47.026411702Z" level=info msg="connecting to shim a43e645f5a5c627d87b21535aaed1fa6c2bd60c54f723fc68781a5bf72c1c40b" address="unix:///run/containerd/s/0ff2c0dd9dd3de9ed4f238f2875fa5e6d3cb8741006b6c2c2ab5a7ab266b794a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:03:47.035985 containerd[1504]: time="2025-09-12T17:03:47.035928810Z" level=info msg="connecting to shim 10b3156ba117b86836914d87689c61145261f188cded2830de5aa8369f83374c" address="unix:///run/containerd/s/79186a504fad99a0d3cd18132a2774dcd972fe0811bd6268a7318a20d2b91990" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:03:47.061842 systemd[1]: Started cri-containerd-10b3156ba117b86836914d87689c61145261f188cded2830de5aa8369f83374c.scope - libcontainer container 10b3156ba117b86836914d87689c61145261f188cded2830de5aa8369f83374c. Sep 12 17:03:47.065451 systemd[1]: Started cri-containerd-a43e645f5a5c627d87b21535aaed1fa6c2bd60c54f723fc68781a5bf72c1c40b.scope - libcontainer container a43e645f5a5c627d87b21535aaed1fa6c2bd60c54f723fc68781a5bf72c1c40b. 
Sep 12 17:03:47.085070 containerd[1504]: time="2025-09-12T17:03:47.083416846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ba1eabdfe97c8fea954b6bb2c90706fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"36abc1d2ef2447a0c715cbf033012f8516e0060c1c5c0a8acceb3e95ea88ea02\"" Sep 12 17:03:47.087389 containerd[1504]: time="2025-09-12T17:03:47.087356916Z" level=info msg="CreateContainer within sandbox \"36abc1d2ef2447a0c715cbf033012f8516e0060c1c5c0a8acceb3e95ea88ea02\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:03:47.120016 containerd[1504]: time="2025-09-12T17:03:47.119952048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"10b3156ba117b86836914d87689c61145261f188cded2830de5aa8369f83374c\"" Sep 12 17:03:47.122146 containerd[1504]: time="2025-09-12T17:03:47.122115602Z" level=info msg="CreateContainer within sandbox \"10b3156ba117b86836914d87689c61145261f188cded2830de5aa8369f83374c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:03:47.131168 containerd[1504]: time="2025-09-12T17:03:47.131133809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a43e645f5a5c627d87b21535aaed1fa6c2bd60c54f723fc68781a5bf72c1c40b\"" Sep 12 17:03:47.140381 containerd[1504]: time="2025-09-12T17:03:47.140335453Z" level=info msg="Container dc795f66df3c1c8340179734a0d40cab13d55bc007708cbb324f403f8b50e0c1: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:03:47.140517 containerd[1504]: time="2025-09-12T17:03:47.140350536Z" level=info msg="CreateContainer within sandbox \"a43e645f5a5c627d87b21535aaed1fa6c2bd60c54f723fc68781a5bf72c1c40b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:03:47.149463 containerd[1504]: time="2025-09-12T17:03:47.149402470Z" level=info msg="Container a1df1fea0c3b9ce7b5a7b9daa387107601deddb1c778ffe64e6c3346262046af: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:03:47.151054 containerd[1504]: time="2025-09-12T17:03:47.151019194Z" level=info msg="Container 185d1fb7c4f9f53c214b45b53841a70f5ba619759cdfc963772eb7c61de599ef: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:03:47.154775 containerd[1504]: time="2025-09-12T17:03:47.154743060Z" level=info msg="CreateContainer within sandbox \"36abc1d2ef2447a0c715cbf033012f8516e0060c1c5c0a8acceb3e95ea88ea02\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc795f66df3c1c8340179734a0d40cab13d55bc007708cbb324f403f8b50e0c1\"" Sep 12 17:03:47.155289 containerd[1504]: time="2025-09-12T17:03:47.155265205Z" level=info msg="StartContainer for \"dc795f66df3c1c8340179734a0d40cab13d55bc007708cbb324f403f8b50e0c1\"" Sep 12 17:03:47.156330 containerd[1504]: time="2025-09-12T17:03:47.156301173Z" level=info msg="connecting to shim dc795f66df3c1c8340179734a0d40cab13d55bc007708cbb324f403f8b50e0c1" address="unix:///run/containerd/s/9bc93d14f053712fb786287125cefed2f1bdb6ad397fd4d5d83237afded44d15" protocol=ttrpc version=3 Sep 12 17:03:47.158236 containerd[1504]: time="2025-09-12T17:03:47.158204874Z" level=info msg="CreateContainer within sandbox \"10b3156ba117b86836914d87689c61145261f188cded2830de5aa8369f83374c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"a1df1fea0c3b9ce7b5a7b9daa387107601deddb1c778ffe64e6c3346262046af\"" Sep 12 17:03:47.158724 containerd[1504]: time="2025-09-12T17:03:47.158701014Z" level=info msg="StartContainer for \"a1df1fea0c3b9ce7b5a7b9daa387107601deddb1c778ffe64e6c3346262046af\"" Sep 12 17:03:47.161226 kubelet[2292]: I0912 17:03:47.161186 2292 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:03:47.161417 containerd[1504]: time="2025-09-12T17:03:47.161212917Z" level=info msg="connecting to shim a1df1fea0c3b9ce7b5a7b9daa387107601deddb1c778ffe64e6c3346262046af" address="unix:///run/containerd/s/79186a504fad99a0d3cd18132a2774dcd972fe0811bd6268a7318a20d2b91990" protocol=ttrpc version=3 Sep 12 17:03:47.161706 kubelet[2292]: E0912 17:03:47.161672 2292 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Sep 12 17:03:47.162269 containerd[1504]: time="2025-09-12T17:03:47.162242683Z" level=info msg="CreateContainer within sandbox \"a43e645f5a5c627d87b21535aaed1fa6c2bd60c54f723fc68781a5bf72c1c40b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"185d1fb7c4f9f53c214b45b53841a70f5ba619759cdfc963772eb7c61de599ef\"" Sep 12 17:03:47.162734 containerd[1504]: time="2025-09-12T17:03:47.162708257Z" level=info msg="StartContainer for \"185d1fb7c4f9f53c214b45b53841a70f5ba619759cdfc963772eb7c61de599ef\"" Sep 12 17:03:47.163863 containerd[1504]: time="2025-09-12T17:03:47.163815719Z" level=info msg="connecting to shim 185d1fb7c4f9f53c214b45b53841a70f5ba619759cdfc963772eb7c61de599ef" address="unix:///run/containerd/s/0ff2c0dd9dd3de9ed4f238f2875fa5e6d3cb8741006b6c2c2ab5a7ab266b794a" protocol=ttrpc version=3 Sep 12 17:03:47.175842 systemd[1]: Started cri-containerd-dc795f66df3c1c8340179734a0d40cab13d55bc007708cbb324f403f8b50e0c1.scope - libcontainer container dc795f66df3c1c8340179734a0d40cab13d55bc007708cbb324f403f8b50e0c1. Sep 12 17:03:47.176284 kubelet[2292]: W0912 17:03:47.176126 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 12 17:03:47.176284 kubelet[2292]: E0912 17:03:47.176186 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:03:47.183804 systemd[1]: Started cri-containerd-185d1fb7c4f9f53c214b45b53841a70f5ba619759cdfc963772eb7c61de599ef.scope - libcontainer container 185d1fb7c4f9f53c214b45b53841a70f5ba619759cdfc963772eb7c61de599ef. Sep 12 17:03:47.187310 systemd[1]: Started cri-containerd-a1df1fea0c3b9ce7b5a7b9daa387107601deddb1c778ffe64e6c3346262046af.scope - libcontainer container a1df1fea0c3b9ce7b5a7b9daa387107601deddb1c778ffe64e6c3346262046af. 
Sep 12 17:03:47.222795 containerd[1504]: time="2025-09-12T17:03:47.222750569Z" level=info msg="StartContainer for \"dc795f66df3c1c8340179734a0d40cab13d55bc007708cbb324f403f8b50e0c1\" returns successfully" Sep 12 17:03:47.231263 containerd[1504]: time="2025-09-12T17:03:47.231101003Z" level=info msg="StartContainer for \"185d1fb7c4f9f53c214b45b53841a70f5ba619759cdfc963772eb7c61de599ef\" returns successfully" Sep 12 17:03:47.241793 containerd[1504]: time="2025-09-12T17:03:47.241726092Z" level=info msg="StartContainer for \"a1df1fea0c3b9ce7b5a7b9daa387107601deddb1c778ffe64e6c3346262046af\" returns successfully" Sep 12 17:03:47.258743 kubelet[2292]: W0912 17:03:47.258673 2292 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 12 17:03:47.258856 kubelet[2292]: E0912 17:03:47.258749 2292 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:03:47.964614 kubelet[2292]: I0912 17:03:47.964035 2292 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:03:49.004067 kubelet[2292]: E0912 17:03:49.004014 2292 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:03:49.078795 kubelet[2292]: I0912 17:03:49.078743 2292 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:03:49.078795 kubelet[2292]: E0912 17:03:49.078794 2292 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:03:49.310478 kubelet[2292]: I0912 17:03:49.310435 2292 apiserver.go:52] "Watching apiserver" Sep 12 17:03:49.320058 kubelet[2292]: I0912 17:03:49.320005 2292 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:03:51.271256 systemd[1]: Reload requested from client PID 2569 ('systemctl') (unit session-7.scope)... Sep 12 17:03:51.271584 systemd[1]: Reloading... Sep 12 17:03:51.347711 zram_generator::config[2614]: No configuration found. Sep 12 17:03:51.518710 systemd[1]: Reloading finished in 246 ms. Sep 12 17:03:51.541129 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:03:51.556000 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:03:51.556258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:03:51.556317 systemd[1]: kubelet.service: Consumed 1.364s CPU time, 129.2M memory peak. Sep 12 17:03:51.558818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:03:51.688536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:03:51.698014 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:03:51.765402 kubelet[2654]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:03:51.765717 kubelet[2654]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:03:51.765717 kubelet[2654]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:03:51.765717 kubelet[2654]: I0912 17:03:51.765539 2654 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:03:51.773027 kubelet[2654]: I0912 17:03:51.772953 2654 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:03:51.773027 kubelet[2654]: I0912 17:03:51.772986 2654 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:03:51.774131 kubelet[2654]: I0912 17:03:51.773567 2654 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:03:51.775476 kubelet[2654]: I0912 17:03:51.775447 2654 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:03:51.777576 kubelet[2654]: I0912 17:03:51.777527 2654 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:03:51.783523 kubelet[2654]: I0912 17:03:51.783198 2654 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:03:51.788148 kubelet[2654]: I0912 17:03:51.787970 2654 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:03:51.788489 kubelet[2654]: I0912 17:03:51.788467 2654 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:03:51.788718 kubelet[2654]: I0912 17:03:51.788648 2654 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:03:51.789563 kubelet[2654]: I0912 17:03:51.788684 2654 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:03:51.789563 kubelet[2654]: I0912 17:03:51.789194 2654 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:03:51.789563 kubelet[2654]: I0912 17:03:51.789205 2654 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:03:51.789563 kubelet[2654]: I0912 17:03:51.789259 2654 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:03:51.789563 kubelet[2654]: I0912 17:03:51.789398 2654 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:03:51.789778 kubelet[2654]: I0912 17:03:51.789411 2654 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:03:51.789778 kubelet[2654]: I0912 17:03:51.789430 2654 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:03:51.789778 kubelet[2654]: I0912 17:03:51.789443 2654 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:03:51.796222 kubelet[2654]: I0912 17:03:51.793684 2654 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:03:51.796222 kubelet[2654]: I0912 17:03:51.794238 2654 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:03:51.796222 kubelet[2654]: I0912 17:03:51.796029 2654 server.go:1274] "Started kubelet" Sep 12 17:03:51.799678 kubelet[2654]: I0912 17:03:51.797803 2654 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:03:51.800403 kubelet[2654]: I0912 
17:03:51.800343 2654 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:03:51.800635 kubelet[2654]: I0912 17:03:51.800611 2654 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:03:51.802326 kubelet[2654]: I0912 17:03:51.802284 2654 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:03:51.806296 kubelet[2654]: I0912 17:03:51.805778 2654 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:03:51.806437 kubelet[2654]: I0912 17:03:51.806379 2654 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:03:51.806530 kubelet[2654]: E0912 17:03:51.806508 2654 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:03:51.808906 kubelet[2654]: I0912 17:03:51.808519 2654 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:03:51.808906 kubelet[2654]: I0912 17:03:51.808676 2654 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:03:51.811165 kubelet[2654]: I0912 17:03:51.811138 2654 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:03:51.819571 kubelet[2654]: I0912 17:03:51.817115 2654 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:03:51.819571 kubelet[2654]: I0912 17:03:51.817138 2654 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:03:51.819571 kubelet[2654]: I0912 17:03:51.817227 2654 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:03:51.820165 kubelet[2654]: E0912 17:03:51.820064 2654 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:03:51.825618 kubelet[2654]: I0912 17:03:51.825576 2654 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:03:51.828482 kubelet[2654]: I0912 17:03:51.828425 2654 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:03:51.828482 kubelet[2654]: I0912 17:03:51.828467 2654 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:03:51.828482 kubelet[2654]: I0912 17:03:51.828485 2654 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:03:51.828613 kubelet[2654]: E0912 17:03:51.828527 2654 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:03:51.881527 kubelet[2654]: I0912 17:03:51.881483 2654 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:03:51.881527 kubelet[2654]: I0912 17:03:51.881508 2654 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:03:51.881527 kubelet[2654]: I0912 17:03:51.881532 2654 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:03:51.881742 kubelet[2654]: I0912 17:03:51.881732 2654 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:03:51.881765 kubelet[2654]: I0912 17:03:51.881744 2654 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:03:51.881765 kubelet[2654]: I0912 17:03:51.881764 2654 policy_none.go:49] "None policy: Start" Sep 12 17:03:51.882545 kubelet[2654]: I0912 17:03:51.882520 2654 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:03:51.882545 kubelet[2654]: I0912 17:03:51.882545 2654 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:03:51.882748 kubelet[2654]: I0912 17:03:51.882722 2654 state_mem.go:75] "Updated machine memory state" Sep 12 17:03:51.887589 kubelet[2654]: I0912 17:03:51.887558 2654 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:03:51.887776 kubelet[2654]: I0912 17:03:51.887752 2654 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:03:51.887855 kubelet[2654]: I0912 17:03:51.887816 2654 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:03:51.888330 kubelet[2654]: I0912 17:03:51.888138 2654 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:03:51.937790 kubelet[2654]: E0912 17:03:51.937743 2654 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:03:51.991161 kubelet[2654]: I0912 17:03:51.991129 2654 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:03:51.998275 kubelet[2654]: I0912 17:03:51.997971 2654 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 17:03:51.998275 kubelet[2654]: I0912 17:03:51.998105 2654 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:03:52.109485 kubelet[2654]: I0912 17:03:52.109373 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba1eabdfe97c8fea954b6bb2c90706fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba1eabdfe97c8fea954b6bb2c90706fb\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:03:52.110570 kubelet[2654]: I0912 17:03:52.109776 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:03:52.110570 kubelet[2654]: I0912 17:03:52.109807 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:03:52.110570 kubelet[2654]: I0912 17:03:52.109824 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:03:52.110570 kubelet[2654]: I0912 17:03:52.109844 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ba1eabdfe97c8fea954b6bb2c90706fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ba1eabdfe97c8fea954b6bb2c90706fb\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:03:52.110570 kubelet[2654]: I0912 17:03:52.109861 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:03:52.110941 kubelet[2654]: I0912 17:03:52.109877 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:03:52.110941 kubelet[2654]: I0912 17:03:52.109895 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:03:52.110941 kubelet[2654]: I0912 17:03:52.109929 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ba1eabdfe97c8fea954b6bb2c90706fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ba1eabdfe97c8fea954b6bb2c90706fb\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:03:52.331300 sudo[2690]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:03:52.331571 sudo[2690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:03:52.659513 sudo[2690]: pam_unix(sudo:session): session closed for user root Sep 12 17:03:52.791202 kubelet[2654]: I0912 17:03:52.791141 2654 apiserver.go:52] "Watching apiserver" Sep 12 17:03:52.809397 kubelet[2654]: I0912 17:03:52.809347 2654 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:03:52.875242 kubelet[2654]: E0912 
17:03:52.872326 2654 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:03:52.896123 kubelet[2654]: I0912 17:03:52.896058 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.896040496 podStartE2EDuration="1.896040496s" podCreationTimestamp="2025-09-12 17:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:03:52.895508848 +0000 UTC m=+1.190330456" watchObservedRunningTime="2025-09-12 17:03:52.896040496 +0000 UTC m=+1.190862064" Sep 12 17:03:52.896388 kubelet[2654]: I0912 17:03:52.896172 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.896167947 podStartE2EDuration="1.896167947s" podCreationTimestamp="2025-09-12 17:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:03:52.885901027 +0000 UTC m=+1.180722635" watchObservedRunningTime="2025-09-12 17:03:52.896167947 +0000 UTC m=+1.190989515" Sep 12 17:03:52.917465 kubelet[2654]: I0912 17:03:52.917325 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.917307601 podStartE2EDuration="2.917307601s" podCreationTimestamp="2025-09-12 17:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:03:52.903242461 +0000 UTC m=+1.198064109" watchObservedRunningTime="2025-09-12 17:03:52.917307601 +0000 UTC m=+1.212129209" Sep 12 17:03:54.378175 sudo[1719]: pam_unix(sudo:session): session closed for user root Sep 12 17:03:54.379622 sshd[1718]: Connection closed by 10.0.0.1 port 35036 Sep 12 17:03:54.381244 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Sep 12 17:03:54.384683 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:35036.service: Deactivated successfully. Sep 12 17:03:54.388359 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:03:54.388582 systemd[1]: session-7.scope: Consumed 8.004s CPU time, 259.2M memory peak. Sep 12 17:03:54.389476 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:03:54.390484 systemd-logind[1486]: Removed session 7. Sep 12 17:03:56.523327 kubelet[2654]: I0912 17:03:56.523057 2654 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:03:56.524319 kubelet[2654]: I0912 17:03:56.523694 2654 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:03:56.524348 containerd[1504]: time="2025-09-12T17:03:56.523407693Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 12 17:03:57.456292 kubelet[2654]: W0912 17:03:57.456255 2654 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 12 17:03:57.456663 kubelet[2654]: E0912 17:03:57.456299 2654 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 12 17:03:57.459796 systemd[1]: Created slice kubepods-besteffort-podaee76af2_78ce_4a0a_a8e8_614f7f52f3af.slice - libcontainer container kubepods-besteffort-podaee76af2_78ce_4a0a_a8e8_614f7f52f3af.slice. Sep 12 17:03:57.472857 systemd[1]: Created slice kubepods-besteffort-pod54afb15d_0fb0_4b39_ad6e_ee5e5fded456.slice - libcontainer container kubepods-besteffort-pod54afb15d_0fb0_4b39_ad6e_ee5e5fded456.slice. Sep 12 17:03:57.503562 systemd[1]: Created slice kubepods-burstable-pod4cbc3c74_cbc1_4824_b63f_cf5d5bfe09c4.slice - libcontainer container kubepods-burstable-pod4cbc3c74_cbc1_4824_b63f_cf5d5bfe09c4.slice. Sep 12 17:03:57.541228 kubelet[2654]: I0912 17:03:57.541183 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aee76af2-78ce-4a0a-a8e8-614f7f52f3af-kube-proxy\") pod \"kube-proxy-z6tb5\" (UID: \"aee76af2-78ce-4a0a-a8e8-614f7f52f3af\") " pod="kube-system/kube-proxy-z6tb5" Sep 12 17:03:57.541228 kubelet[2654]: I0912 17:03:57.541228 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aee76af2-78ce-4a0a-a8e8-614f7f52f3af-xtables-lock\") pod \"kube-proxy-z6tb5\" (UID: \"aee76af2-78ce-4a0a-a8e8-614f7f52f3af\") " pod="kube-system/kube-proxy-z6tb5" Sep 12 17:03:57.541680 kubelet[2654]: I0912 17:03:57.541245 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aee76af2-78ce-4a0a-a8e8-614f7f52f3af-lib-modules\") pod \"kube-proxy-z6tb5\" (UID: \"aee76af2-78ce-4a0a-a8e8-614f7f52f3af\") " pod="kube-system/kube-proxy-z6tb5" Sep 12 17:03:57.541680 kubelet[2654]: I0912 17:03:57.541271 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4krlv\" (UniqueName: \"kubernetes.io/projected/aee76af2-78ce-4a0a-a8e8-614f7f52f3af-kube-api-access-4krlv\") pod \"kube-proxy-z6tb5\" (UID: \"aee76af2-78ce-4a0a-a8e8-614f7f52f3af\") " pod="kube-system/kube-proxy-z6tb5" Sep 12 17:03:57.642035 kubelet[2654]: I0912 17:03:57.641986 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-xtables-lock\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642035 kubelet[2654]: I0912 17:03:57.642033 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-lib-modules\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642035 kubelet[2654]: I0912 17:03:57.642051 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-hubble-tls\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642222 kubelet[2654]: I0912 17:03:57.642068 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-bpf-maps\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642222 kubelet[2654]: I0912 17:03:57.642083 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-hostproc\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642222 kubelet[2654]: I0912 17:03:57.642111 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54afb15d-0fb0-4b39-ad6e-ee5e5fded456-cilium-config-path\") pod \"cilium-operator-5d85765b45-6wgl5\" (UID: \"54afb15d-0fb0-4b39-ad6e-ee5e5fded456\") " pod="kube-system/cilium-operator-5d85765b45-6wgl5" Sep 12 17:03:57.642222 kubelet[2654]: I0912 17:03:57.642130 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drlsr\" (UniqueName: \"kubernetes.io/projected/54afb15d-0fb0-4b39-ad6e-ee5e5fded456-kube-api-access-drlsr\") pod \"cilium-operator-5d85765b45-6wgl5\" (UID: \"54afb15d-0fb0-4b39-ad6e-ee5e5fded456\") " pod="kube-system/cilium-operator-5d85765b45-6wgl5" Sep 12 17:03:57.642222 kubelet[2654]: I0912 17:03:57.642144 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-cgroup\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642316 kubelet[2654]: I0912 17:03:57.642158 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-host-proc-sys-net\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642316 kubelet[2654]: I0912 17:03:57.642184 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xqqj\" (UniqueName: \"kubernetes.io/projected/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-kube-api-access-9xqqj\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642316 kubelet[2654]: I0912 17:03:57.642199 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-run\") pod \"cilium-86w52\" 
(UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642316 kubelet[2654]: I0912 17:03:57.642213 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cni-path\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642316 kubelet[2654]: I0912 17:03:57.642229 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-config-path\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642316 kubelet[2654]: I0912 17:03:57.642256 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-etc-cni-netd\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642431 kubelet[2654]: I0912 17:03:57.642272 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-host-proc-sys-kernel\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.642431 kubelet[2654]: I0912 17:03:57.642297 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-clustermesh-secrets\") pod \"cilium-86w52\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " pod="kube-system/cilium-86w52" Sep 12 17:03:57.773057 containerd[1504]: time="2025-09-12T17:03:57.772465843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z6tb5,Uid:aee76af2-78ce-4a0a-a8e8-614f7f52f3af,Namespace:kube-system,Attempt:0,}" Sep 12 17:03:57.787818 containerd[1504]: time="2025-09-12T17:03:57.787774921Z" level=info msg="connecting to shim 64aec76cbe1ca560d225e64425b66ed8d14fdaefd2dbc96fb46972ea9807f141" address="unix:///run/containerd/s/57c916f7d11f92df64a1c6f21772b9a6e3edbd402d33c16d7db47a3455f8e274" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:03:57.809832 systemd[1]: Started cri-containerd-64aec76cbe1ca560d225e64425b66ed8d14fdaefd2dbc96fb46972ea9807f141.scope - libcontainer container 64aec76cbe1ca560d225e64425b66ed8d14fdaefd2dbc96fb46972ea9807f141. 
Sep 12 17:03:57.832982 containerd[1504]: time="2025-09-12T17:03:57.832944303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z6tb5,Uid:aee76af2-78ce-4a0a-a8e8-614f7f52f3af,Namespace:kube-system,Attempt:0,} returns sandbox id \"64aec76cbe1ca560d225e64425b66ed8d14fdaefd2dbc96fb46972ea9807f141\"" Sep 12 17:03:57.835825 containerd[1504]: time="2025-09-12T17:03:57.835796576Z" level=info msg="CreateContainer within sandbox \"64aec76cbe1ca560d225e64425b66ed8d14fdaefd2dbc96fb46972ea9807f141\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:03:57.844257 containerd[1504]: time="2025-09-12T17:03:57.844207067Z" level=info msg="Container f1c50e1fb8418881866168664c159f3d7962d79f0a4239eb959c938658b042e3: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:03:57.853509 containerd[1504]: time="2025-09-12T17:03:57.853450053Z" level=info msg="CreateContainer within sandbox \"64aec76cbe1ca560d225e64425b66ed8d14fdaefd2dbc96fb46972ea9807f141\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f1c50e1fb8418881866168664c159f3d7962d79f0a4239eb959c938658b042e3\"" Sep 12 17:03:57.854044 containerd[1504]: time="2025-09-12T17:03:57.854020732Z" level=info msg="StartContainer for \"f1c50e1fb8418881866168664c159f3d7962d79f0a4239eb959c938658b042e3\"" Sep 12 17:03:57.855471 containerd[1504]: time="2025-09-12T17:03:57.855438268Z" level=info msg="connecting to shim f1c50e1fb8418881866168664c159f3d7962d79f0a4239eb959c938658b042e3" address="unix:///run/containerd/s/57c916f7d11f92df64a1c6f21772b9a6e3edbd402d33c16d7db47a3455f8e274" protocol=ttrpc version=3 Sep 12 17:03:57.878804 systemd[1]: Started cri-containerd-f1c50e1fb8418881866168664c159f3d7962d79f0a4239eb959c938658b042e3.scope - libcontainer container f1c50e1fb8418881866168664c159f3d7962d79f0a4239eb959c938658b042e3. Sep 12 17:03:57.913893 containerd[1504]: time="2025-09-12T17:03:57.913762702Z" level=info msg="StartContainer for \"f1c50e1fb8418881866168664c159f3d7962d79f0a4239eb959c938658b042e3\" returns successfully" Sep 12 17:03:58.744902 kubelet[2654]: E0912 17:03:58.744793 2654 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:03:58.744902 kubelet[2654]: E0912 17:03:58.744907 2654 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/54afb15d-0fb0-4b39-ad6e-ee5e5fded456-cilium-config-path podName:54afb15d-0fb0-4b39-ad6e-ee5e5fded456 nodeName:}" failed. No retries permitted until 2025-09-12 17:03:59.244878272 +0000 UTC m=+7.539699880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/54afb15d-0fb0-4b39-ad6e-ee5e5fded456-cilium-config-path") pod "cilium-operator-5d85765b45-6wgl5" (UID: "54afb15d-0fb0-4b39-ad6e-ee5e5fded456") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:03:58.745497 kubelet[2654]: E0912 17:03:58.745445 2654 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:03:58.745539 kubelet[2654]: E0912 17:03:58.745521 2654 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-config-path podName:4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4 nodeName:}" failed. No retries permitted until 2025-09-12 17:03:59.245503912 +0000 UTC m=+7.540325520 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-config-path") pod "cilium-86w52" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:03:58.888189 kubelet[2654]: I0912 17:03:58.888131 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z6tb5" podStartSLOduration=1.88809987 podStartE2EDuration="1.88809987s" podCreationTimestamp="2025-09-12 17:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:03:58.88809619 +0000 UTC m=+7.182917798" watchObservedRunningTime="2025-09-12 17:03:58.88809987 +0000 UTC m=+7.182921478" Sep 12 17:03:59.275926 containerd[1504]: time="2025-09-12T17:03:59.275878734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6wgl5,Uid:54afb15d-0fb0-4b39-ad6e-ee5e5fded456,Namespace:kube-system,Attempt:0,}" Sep 12 17:03:59.295592 containerd[1504]: time="2025-09-12T17:03:59.295553611Z" level=info msg="connecting to shim 657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1" address="unix:///run/containerd/s/43ef6af3b53b2c1d144cf64290b2c899aa79115473a629efe29e98902a7c2780" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:03:59.306625 containerd[1504]: time="2025-09-12T17:03:59.306582603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86w52,Uid:4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4,Namespace:kube-system,Attempt:0,}" Sep 12 17:03:59.331448 systemd[1]: Started cri-containerd-657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1.scope - libcontainer container 657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1. Sep 12 17:03:59.332621 containerd[1504]: time="2025-09-12T17:03:59.332294168Z" level=info msg="connecting to shim ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278" address="unix:///run/containerd/s/2b4e8a9a20baaa81fca4f081c576cc2316b587668d1d261a32825a44a12225a6" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:03:59.367526 systemd[1]: Started cri-containerd-ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278.scope - libcontainer container ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278. Sep 12 17:03:59.383755 containerd[1504]: time="2025-09-12T17:03:59.383712017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6wgl5,Uid:54afb15d-0fb0-4b39-ad6e-ee5e5fded456,Namespace:kube-system,Attempt:0,} returns sandbox id \"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\"" Sep 12 17:03:59.385751 containerd[1504]: time="2025-09-12T17:03:59.385324115Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:03:59.402056 containerd[1504]: time="2025-09-12T17:03:59.401912005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86w52,Uid:4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\"" Sep 12 17:04:01.068019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019821341.mount: Deactivated successfully. 
Sep 12 17:04:01.343325 containerd[1504]: time="2025-09-12T17:04:01.342513819Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:04:01.343325 containerd[1504]: time="2025-09-12T17:04:01.343015967Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 17:04:01.344159 containerd[1504]: time="2025-09-12T17:04:01.344129188Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:04:01.346220 containerd[1504]: time="2025-09-12T17:04:01.346113136Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.960333233s" Sep 12 17:04:01.346220 containerd[1504]: time="2025-09-12T17:04:01.346149538Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 17:04:01.346873 containerd[1504]: time="2025-09-12T17:04:01.346843136Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:04:01.349906 containerd[1504]: time="2025-09-12T17:04:01.349873302Z" level=info msg="CreateContainer within sandbox \"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:04:01.361671 containerd[1504]: time="2025-09-12T17:04:01.359874290Z" level=info msg="Container 80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:04:01.390334 containerd[1504]: time="2025-09-12T17:04:01.390282476Z" level=info msg="CreateContainer within sandbox \"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\"" Sep 12 17:04:01.391029 containerd[1504]: time="2025-09-12T17:04:01.390964433Z" level=info msg="StartContainer for \"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\"" Sep 12 17:04:01.392029 containerd[1504]: time="2025-09-12T17:04:01.392001290Z" level=info msg="connecting to shim 80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d" address="unix:///run/containerd/s/43ef6af3b53b2c1d144cf64290b2c899aa79115473a629efe29e98902a7c2780" protocol=ttrpc version=3 Sep 12 17:04:01.420902 systemd[1]: Started cri-containerd-80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d.scope - libcontainer container 80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d. 
Sep 12 17:04:01.453041 containerd[1504]: time="2025-09-12T17:04:01.452993110Z" level=info msg="StartContainer for \"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\" returns successfully" Sep 12 17:04:01.903507 kubelet[2654]: I0912 17:04:01.903445 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6wgl5" podStartSLOduration=2.941619823 podStartE2EDuration="4.903425142s" podCreationTimestamp="2025-09-12 17:03:57 +0000 UTC" firstStartedPulling="2025-09-12 17:03:59.384918531 +0000 UTC m=+7.679740099" lastFinishedPulling="2025-09-12 17:04:01.34672385 +0000 UTC m=+9.641545418" observedRunningTime="2025-09-12 17:04:01.899967273 +0000 UTC m=+10.194788881" watchObservedRunningTime="2025-09-12 17:04:01.903425142 +0000 UTC m=+10.198246750" Sep 12 17:04:04.538944 update_engine[1491]: I20250912 17:04:04.538673 1491 update_attempter.cc:509] Updating boot flags... Sep 12 17:04:19.996779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2777854116.mount: Deactivated successfully. Sep 12 17:04:20.774729 systemd[1]: Started sshd@7-10.0.0.14:22-10.0.0.1:38960.service - OpenSSH per-connection server daemon (10.0.0.1:38960). Sep 12 17:04:20.840763 sshd[3130]: Accepted publickey for core from 10.0.0.1 port 38960 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:20.842966 sshd-session[3130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:20.850770 systemd-logind[1486]: New session 8 of user core. Sep 12 17:04:20.855783 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:04:21.017145 sshd[3133]: Connection closed by 10.0.0.1 port 38960 Sep 12 17:04:21.017465 sshd-session[3130]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:21.021470 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:04:21.021847 systemd[1]: sshd@7-10.0.0.14:22-10.0.0.1:38960.service: Deactivated successfully. Sep 12 17:04:21.025701 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:04:21.027682 systemd-logind[1486]: Removed session 8. 
Sep 12 17:04:21.511340 containerd[1504]: time="2025-09-12T17:04:21.511279863Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:04:21.512374 containerd[1504]: time="2025-09-12T17:04:21.512343848Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 17:04:21.513213 containerd[1504]: time="2025-09-12T17:04:21.513194707Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:04:21.514907 containerd[1504]: time="2025-09-12T17:04:21.514880345Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 20.168006087s" Sep 12 17:04:21.514955 containerd[1504]: time="2025-09-12T17:04:21.514913826Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 17:04:21.518029 containerd[1504]: time="2025-09-12T17:04:21.517761051Z" level=info msg="CreateContainer within sandbox \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:04:21.528847 containerd[1504]: time="2025-09-12T17:04:21.528810781Z" level=info msg="Container cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:04:21.530026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835009746.mount: Deactivated successfully. Sep 12 17:04:21.533498 containerd[1504]: time="2025-09-12T17:04:21.533459127Z" level=info msg="CreateContainer within sandbox \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d\"" Sep 12 17:04:21.534079 containerd[1504]: time="2025-09-12T17:04:21.534049380Z" level=info msg="StartContainer for \"cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d\"" Sep 12 17:04:21.535000 containerd[1504]: time="2025-09-12T17:04:21.534976441Z" level=info msg="connecting to shim cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d" address="unix:///run/containerd/s/2b4e8a9a20baaa81fca4f081c576cc2316b587668d1d261a32825a44a12225a6" protocol=ttrpc version=3 Sep 12 17:04:21.577870 systemd[1]: Started cri-containerd-cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d.scope - libcontainer container cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d. 
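The pull above fetches quay.io/cilium/cilium by digest (roughly 157 MB over about 20 s) before the mount-cgroup init container is created in the existing sandbox. Below is a hedged sketch of an equivalent pull using the containerd 1.x Go client; the socket path, the k8s.io namespace, and the import paths are assumptions for illustration rather than details recorded in the log.

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumption: default containerd socket on a node like the one in this log.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes-managed images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same image reference (by digest) that the kubelet asked containerd to pull above.
	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
}

Because the image lands in the k8s.io namespace, `ctr -n k8s.io images ls` on the node should show the same digest that the "Pulled image" entry reports.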
Sep 12 17:04:21.605105 containerd[1504]: time="2025-09-12T17:04:21.605065953Z" level=info msg="StartContainer for \"cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d\" returns successfully" Sep 12 17:04:21.617712 systemd[1]: cri-containerd-cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d.scope: Deactivated successfully. Sep 12 17:04:21.640564 containerd[1504]: time="2025-09-12T17:04:21.640493398Z" level=info msg="received exit event container_id:\"cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d\" id:\"cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d\" pid:3163 exited_at:{seconds:1757696661 nanos:638031422}" Sep 12 17:04:21.640732 containerd[1504]: time="2025-09-12T17:04:21.640601720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d\" id:\"cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d\" pid:3163 exited_at:{seconds:1757696661 nanos:638031422}" Sep 12 17:04:21.947595 containerd[1504]: time="2025-09-12T17:04:21.947541970Z" level=info msg="CreateContainer within sandbox \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:04:21.956922 containerd[1504]: time="2025-09-12T17:04:21.956720419Z" level=info msg="Container 3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:04:21.968376 containerd[1504]: time="2025-09-12T17:04:21.968333963Z" level=info msg="CreateContainer within sandbox \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479\"" Sep 12 17:04:21.969309 containerd[1504]: time="2025-09-12T17:04:21.969173502Z" level=info msg="StartContainer for \"3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479\"" Sep 12 17:04:21.970233 containerd[1504]: time="2025-09-12T17:04:21.970201325Z" level=info msg="connecting to shim 3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479" address="unix:///run/containerd/s/2b4e8a9a20baaa81fca4f081c576cc2316b587668d1d261a32825a44a12225a6" protocol=ttrpc version=3 Sep 12 17:04:21.991808 systemd[1]: Started cri-containerd-3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479.scope - libcontainer container 3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479. Sep 12 17:04:22.017028 containerd[1504]: time="2025-09-12T17:04:22.016932935Z" level=info msg="StartContainer for \"3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479\" returns successfully" Sep 12 17:04:22.030297 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:04:22.030534 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:04:22.031347 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:04:22.032863 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:04:22.034136 systemd[1]: cri-containerd-3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479.scope: Deactivated successfully. 
Sep 12 17:04:22.035988 containerd[1504]: time="2025-09-12T17:04:22.035956992Z" level=info msg="received exit event container_id:\"3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479\" id:\"3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479\" pid:3207 exited_at:{seconds:1757696662 nanos:33518579}" Sep 12 17:04:22.036159 containerd[1504]: time="2025-09-12T17:04:22.036134476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479\" id:\"3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479\" pid:3207 exited_at:{seconds:1757696662 nanos:33518579}" Sep 12 17:04:22.063715 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:04:22.524110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d-rootfs.mount: Deactivated successfully. Sep 12 17:04:22.959324 containerd[1504]: time="2025-09-12T17:04:22.959208936Z" level=info msg="CreateContainer within sandbox \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:04:22.968766 containerd[1504]: time="2025-09-12T17:04:22.968675744Z" level=info msg="Container 889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:04:22.977189 containerd[1504]: time="2025-09-12T17:04:22.977141450Z" level=info msg="CreateContainer within sandbox \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d\"" Sep 12 17:04:22.977667 containerd[1504]: time="2025-09-12T17:04:22.977631820Z" level=info msg="StartContainer for \"889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d\"" Sep 12 17:04:22.980056 containerd[1504]: time="2025-09-12T17:04:22.980015913Z" level=info msg="connecting to shim 889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d" address="unix:///run/containerd/s/2b4e8a9a20baaa81fca4f081c576cc2316b587668d1d261a32825a44a12225a6" protocol=ttrpc version=3 Sep 12 17:04:23.001856 systemd[1]: Started cri-containerd-889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d.scope - libcontainer container 889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d. Sep 12 17:04:23.032377 containerd[1504]: time="2025-09-12T17:04:23.032307839Z" level=info msg="StartContainer for \"889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d\" returns successfully" Sep 12 17:04:23.035205 systemd[1]: cri-containerd-889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d.scope: Deactivated successfully. 
Sep 12 17:04:23.041817 containerd[1504]: time="2025-09-12T17:04:23.041775400Z" level=info msg="received exit event container_id:\"889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d\" id:\"889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d\" pid:3257 exited_at:{seconds:1757696663 nanos:41510794}" Sep 12 17:04:23.042008 containerd[1504]: time="2025-09-12T17:04:23.041906482Z" level=info msg="TaskExit event in podsandbox handler container_id:\"889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d\" id:\"889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d\" pid:3257 exited_at:{seconds:1757696663 nanos:41510794}" Sep 12 17:04:23.059996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d-rootfs.mount: Deactivated successfully. Sep 12 17:04:23.964834 containerd[1504]: time="2025-09-12T17:04:23.964770880Z" level=info msg="CreateContainer within sandbox \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:04:23.994428 containerd[1504]: time="2025-09-12T17:04:23.993616692Z" level=info msg="Container 76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:04:24.005586 containerd[1504]: time="2025-09-12T17:04:24.005528143Z" level=info msg="CreateContainer within sandbox \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77\"" Sep 12 17:04:24.007078 containerd[1504]: time="2025-09-12T17:04:24.007049534Z" level=info msg="StartContainer for \"76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77\"" Sep 12 17:04:24.011873 containerd[1504]: time="2025-09-12T17:04:24.011572387Z" level=info msg="connecting to shim 76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77" address="unix:///run/containerd/s/2b4e8a9a20baaa81fca4f081c576cc2316b587668d1d261a32825a44a12225a6" protocol=ttrpc version=3 Sep 12 17:04:24.051855 systemd[1]: Started cri-containerd-76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77.scope - libcontainer container 76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77. Sep 12 17:04:24.108998 systemd[1]: cri-containerd-76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77.scope: Deactivated successfully. 
Sep 12 17:04:24.110708 containerd[1504]: time="2025-09-12T17:04:24.110602624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77\" id:\"76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77\" pid:3297 exited_at:{seconds:1757696664 nanos:110313538}" Sep 12 17:04:24.111466 containerd[1504]: time="2025-09-12T17:04:24.111338879Z" level=info msg="received exit event container_id:\"76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77\" id:\"76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77\" pid:3297 exited_at:{seconds:1757696664 nanos:110313538}" Sep 12 17:04:24.113402 containerd[1504]: time="2025-09-12T17:04:24.113376361Z" level=info msg="StartContainer for \"76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77\" returns successfully" Sep 12 17:04:24.134013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77-rootfs.mount: Deactivated successfully. Sep 12 17:04:24.971472 containerd[1504]: time="2025-09-12T17:04:24.970951399Z" level=info msg="CreateContainer within sandbox \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:04:24.985099 containerd[1504]: time="2025-09-12T17:04:24.985052049Z" level=info msg="Container 591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:04:25.000290 containerd[1504]: time="2025-09-12T17:04:25.000237721Z" level=info msg="CreateContainer within sandbox \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\"" Sep 12 17:04:25.000979 containerd[1504]: time="2025-09-12T17:04:25.000941896Z" level=info msg="StartContainer for \"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\"" Sep 12 17:04:25.002001 containerd[1504]: time="2025-09-12T17:04:25.001949356Z" level=info msg="connecting to shim 591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68" address="unix:///run/containerd/s/2b4e8a9a20baaa81fca4f081c576cc2316b587668d1d261a32825a44a12225a6" protocol=ttrpc version=3 Sep 12 17:04:25.027883 systemd[1]: Started cri-containerd-591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68.scope - libcontainer container 591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68. Sep 12 17:04:25.066541 containerd[1504]: time="2025-09-12T17:04:25.066504044Z" level=info msg="StartContainer for \"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" returns successfully" Sep 12 17:04:25.143902 containerd[1504]: time="2025-09-12T17:04:25.143856186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" id:\"14ab91217fafc58b073aba79e0e6656313f13157833be3faab952c54ccb91952\" pid:3364 exited_at:{seconds:1757696665 nanos:143572420}" Sep 12 17:04:25.181098 kubelet[2654]: I0912 17:04:25.181055 2654 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:04:25.234500 systemd[1]: Created slice kubepods-burstable-pod806b878e_5a49_48aa_838b_13e596203c4f.slice - libcontainer container kubepods-burstable-pod806b878e_5a49_48aa_838b_13e596203c4f.slice. 
Sep 12 17:04:25.240798 systemd[1]: Created slice kubepods-burstable-podc51bfef8_2036_46d0_b33f_a6c2c1091713.slice - libcontainer container kubepods-burstable-podc51bfef8_2036_46d0_b33f_a6c2c1091713.slice. Sep 12 17:04:25.325626 kubelet[2654]: I0912 17:04:25.325431 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/806b878e-5a49-48aa-838b-13e596203c4f-config-volume\") pod \"coredns-7c65d6cfc9-mrlnn\" (UID: \"806b878e-5a49-48aa-838b-13e596203c4f\") " pod="kube-system/coredns-7c65d6cfc9-mrlnn" Sep 12 17:04:25.325626 kubelet[2654]: I0912 17:04:25.325486 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlgz4\" (UniqueName: \"kubernetes.io/projected/806b878e-5a49-48aa-838b-13e596203c4f-kube-api-access-rlgz4\") pod \"coredns-7c65d6cfc9-mrlnn\" (UID: \"806b878e-5a49-48aa-838b-13e596203c4f\") " pod="kube-system/coredns-7c65d6cfc9-mrlnn" Sep 12 17:04:25.325626 kubelet[2654]: I0912 17:04:25.325512 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhglq\" (UniqueName: \"kubernetes.io/projected/c51bfef8-2036-46d0-b33f-a6c2c1091713-kube-api-access-dhglq\") pod \"coredns-7c65d6cfc9-8v4tw\" (UID: \"c51bfef8-2036-46d0-b33f-a6c2c1091713\") " pod="kube-system/coredns-7c65d6cfc9-8v4tw" Sep 12 17:04:25.325626 kubelet[2654]: I0912 17:04:25.325532 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51bfef8-2036-46d0-b33f-a6c2c1091713-config-volume\") pod \"coredns-7c65d6cfc9-8v4tw\" (UID: \"c51bfef8-2036-46d0-b33f-a6c2c1091713\") " pod="kube-system/coredns-7c65d6cfc9-8v4tw" Sep 12 17:04:25.540544 containerd[1504]: time="2025-09-12T17:04:25.540389133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mrlnn,Uid:806b878e-5a49-48aa-838b-13e596203c4f,Namespace:kube-system,Attempt:0,}" Sep 12 17:04:25.543195 containerd[1504]: time="2025-09-12T17:04:25.542973145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8v4tw,Uid:c51bfef8-2036-46d0-b33f-a6c2c1091713,Namespace:kube-system,Attempt:0,}" Sep 12 17:04:26.033289 systemd[1]: Started sshd@8-10.0.0.14:22-10.0.0.1:38974.service - OpenSSH per-connection server daemon (10.0.0.1:38974). Sep 12 17:04:26.082857 sshd[3465]: Accepted publickey for core from 10.0.0.1 port 38974 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:26.084837 sshd-session[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:26.090168 systemd-logind[1486]: New session 9 of user core. Sep 12 17:04:26.103859 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:04:26.218585 sshd[3468]: Connection closed by 10.0.0.1 port 38974 Sep 12 17:04:26.218912 sshd-session[3465]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:26.222627 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:04:26.222752 systemd[1]: sshd@8-10.0.0.14:22-10.0.0.1:38974.service: Deactivated successfully. Sep 12 17:04:26.224472 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:04:26.226936 systemd-logind[1486]: Removed session 9. 
Sep 12 17:04:27.107162 systemd-networkd[1440]: cilium_host: Link UP Sep 12 17:04:27.107286 systemd-networkd[1440]: cilium_net: Link UP Sep 12 17:04:27.107405 systemd-networkd[1440]: cilium_net: Gained carrier Sep 12 17:04:27.107535 systemd-networkd[1440]: cilium_host: Gained carrier Sep 12 17:04:27.193000 systemd-networkd[1440]: cilium_vxlan: Link UP Sep 12 17:04:27.193206 systemd-networkd[1440]: cilium_vxlan: Gained carrier Sep 12 17:04:27.199920 systemd-networkd[1440]: cilium_net: Gained IPv6LL Sep 12 17:04:27.464667 kernel: NET: Registered PF_ALG protocol family Sep 12 17:04:27.902797 systemd-networkd[1440]: cilium_host: Gained IPv6LL Sep 12 17:04:28.107857 systemd-networkd[1440]: lxc_health: Link UP Sep 12 17:04:28.108199 systemd-networkd[1440]: lxc_health: Gained carrier Sep 12 17:04:28.350893 systemd-networkd[1440]: cilium_vxlan: Gained IPv6LL Sep 12 17:04:28.607513 systemd-networkd[1440]: lxc486e7108c736: Link UP Sep 12 17:04:28.607745 systemd-networkd[1440]: lxc1f93b7353c22: Link UP Sep 12 17:04:28.617694 kernel: eth0: renamed from tmp06700 Sep 12 17:04:28.618129 kernel: eth0: renamed from tmp00872 Sep 12 17:04:28.621616 systemd-networkd[1440]: lxc486e7108c736: Gained carrier Sep 12 17:04:28.626004 systemd-networkd[1440]: lxc1f93b7353c22: Gained carrier Sep 12 17:04:29.330785 kubelet[2654]: I0912 17:04:29.329850 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-86w52" podStartSLOduration=10.217484779 podStartE2EDuration="32.329834176s" podCreationTimestamp="2025-09-12 17:03:57 +0000 UTC" firstStartedPulling="2025-09-12 17:03:59.403190883 +0000 UTC m=+7.698012491" lastFinishedPulling="2025-09-12 17:04:21.51554028 +0000 UTC m=+29.810361888" observedRunningTime="2025-09-12 17:04:25.995825255 +0000 UTC m=+34.290646863" watchObservedRunningTime="2025-09-12 17:04:29.329834176 +0000 UTC m=+37.624655784" Sep 12 17:04:29.758969 systemd-networkd[1440]: lxc_health: Gained IPv6LL Sep 12 17:04:29.888051 systemd-networkd[1440]: lxc486e7108c736: Gained IPv6LL Sep 12 17:04:30.462912 systemd-networkd[1440]: lxc1f93b7353c22: Gained IPv6LL Sep 12 17:04:31.229753 systemd[1]: Started sshd@9-10.0.0.14:22-10.0.0.1:36860.service - OpenSSH per-connection server daemon (10.0.0.1:36860). Sep 12 17:04:31.295609 sshd[3866]: Accepted publickey for core from 10.0.0.1 port 36860 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:31.297252 sshd-session[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:31.302664 systemd-logind[1486]: New session 10 of user core. Sep 12 17:04:31.309845 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:04:31.443298 sshd[3869]: Connection closed by 10.0.0.1 port 36860 Sep 12 17:04:31.443871 sshd-session[3866]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:31.447400 systemd[1]: sshd@9-10.0.0.14:22-10.0.0.1:36860.service: Deactivated successfully. Sep 12 17:04:31.451199 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:04:31.452117 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:04:31.453304 systemd-logind[1486]: Removed session 10. 
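The systemd-networkd entries above record Cilium bringing up its datapath interfaces (cilium_host, cilium_net, cilium_vxlan, lxc_health and the per-pod lxc* devices), each gaining carrier and then an IPv6 link-local address. The short sketch below inspects those links from Go; the vishvananda/netlink package and the hard-coded interface names are assumptions for illustration only.

package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

func main() {
	// Interface names taken from the systemd-networkd entries above; purely illustrative.
	for _, name := range []string{"cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			fmt.Println(name, "not found:", err)
			continue
		}
		a := link.Attrs()
		fmt.Printf("%-12s type=%-7s state=%-7s mtu=%d\n", name, link.Type(), a.OperState, a.MTU)
	}
}

The same information is available from iproute2, e.g. `ip -6 addr show dev cilium_vxlan` to confirm the IPv6LL address that systemd-networkd reports gaining above.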
Sep 12 17:04:32.176366 containerd[1504]: time="2025-09-12T17:04:32.176244973Z" level=info msg="connecting to shim 00872a26daf8f1bcbb7305bf851a50eb78a6a8147eaf51e1fd08ee76fdc93c95" address="unix:///run/containerd/s/0d0f9483478ffb28d4c7f2da239ef788b8dcdbcf136c5306a8ec8d7134e90124" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:04:32.177167 containerd[1504]: time="2025-09-12T17:04:32.177137988Z" level=info msg="connecting to shim 06700a40c907db706f02b910c34bfbcb706e512b5133b0576c398ffaaaea8d4e" address="unix:///run/containerd/s/bf7b9789fa281abe4eb25b79e50e707165ea4839f1434c5aed71707d32a5ca81" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:04:32.209846 systemd[1]: Started cri-containerd-00872a26daf8f1bcbb7305bf851a50eb78a6a8147eaf51e1fd08ee76fdc93c95.scope - libcontainer container 00872a26daf8f1bcbb7305bf851a50eb78a6a8147eaf51e1fd08ee76fdc93c95. Sep 12 17:04:32.211029 systemd[1]: Started cri-containerd-06700a40c907db706f02b910c34bfbcb706e512b5133b0576c398ffaaaea8d4e.scope - libcontainer container 06700a40c907db706f02b910c34bfbcb706e512b5133b0576c398ffaaaea8d4e. Sep 12 17:04:32.224029 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:04:32.225826 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:04:32.247009 containerd[1504]: time="2025-09-12T17:04:32.246967581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8v4tw,Uid:c51bfef8-2036-46d0-b33f-a6c2c1091713,Namespace:kube-system,Attempt:0,} returns sandbox id \"06700a40c907db706f02b910c34bfbcb706e512b5133b0576c398ffaaaea8d4e\"" Sep 12 17:04:32.249232 containerd[1504]: time="2025-09-12T17:04:32.249180018Z" level=info msg="CreateContainer within sandbox \"06700a40c907db706f02b910c34bfbcb706e512b5133b0576c398ffaaaea8d4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:04:32.262381 containerd[1504]: time="2025-09-12T17:04:32.261816307Z" level=info msg="Container e26b88efae53415ff0c1cd9cd800272cc81954128c7a0e27f82af78925fac388: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:04:32.267398 containerd[1504]: time="2025-09-12T17:04:32.267360118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mrlnn,Uid:806b878e-5a49-48aa-838b-13e596203c4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"00872a26daf8f1bcbb7305bf851a50eb78a6a8147eaf51e1fd08ee76fdc93c95\"" Sep 12 17:04:32.270554 containerd[1504]: time="2025-09-12T17:04:32.270416169Z" level=info msg="CreateContainer within sandbox \"06700a40c907db706f02b910c34bfbcb706e512b5133b0576c398ffaaaea8d4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e26b88efae53415ff0c1cd9cd800272cc81954128c7a0e27f82af78925fac388\"" Sep 12 17:04:32.271188 containerd[1504]: time="2025-09-12T17:04:32.271003779Z" level=info msg="StartContainer for \"e26b88efae53415ff0c1cd9cd800272cc81954128c7a0e27f82af78925fac388\"" Sep 12 17:04:32.271880 containerd[1504]: time="2025-09-12T17:04:32.271852633Z" level=info msg="connecting to shim e26b88efae53415ff0c1cd9cd800272cc81954128c7a0e27f82af78925fac388" address="unix:///run/containerd/s/bf7b9789fa281abe4eb25b79e50e707165ea4839f1434c5aed71707d32a5ca81" protocol=ttrpc version=3 Sep 12 17:04:32.273800 containerd[1504]: time="2025-09-12T17:04:32.273397178Z" level=info msg="CreateContainer within sandbox \"00872a26daf8f1bcbb7305bf851a50eb78a6a8147eaf51e1fd08ee76fdc93c95\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:04:32.282743 containerd[1504]: time="2025-09-12T17:04:32.282675731Z" level=info msg="Container f79cb03034ded0bc676f7d23f87377c05c408f52f590da87b7840219570344dd: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:04:32.289467 containerd[1504]: time="2025-09-12T17:04:32.289432003Z" level=info msg="CreateContainer within sandbox \"00872a26daf8f1bcbb7305bf851a50eb78a6a8147eaf51e1fd08ee76fdc93c95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f79cb03034ded0bc676f7d23f87377c05c408f52f590da87b7840219570344dd\"" Sep 12 17:04:32.291918 containerd[1504]: time="2025-09-12T17:04:32.291427716Z" level=info msg="StartContainer for \"f79cb03034ded0bc676f7d23f87377c05c408f52f590da87b7840219570344dd\"" Sep 12 17:04:32.292972 containerd[1504]: time="2025-09-12T17:04:32.292943101Z" level=info msg="connecting to shim f79cb03034ded0bc676f7d23f87377c05c408f52f590da87b7840219570344dd" address="unix:///run/containerd/s/0d0f9483478ffb28d4c7f2da239ef788b8dcdbcf136c5306a8ec8d7134e90124" protocol=ttrpc version=3 Sep 12 17:04:32.293801 systemd[1]: Started cri-containerd-e26b88efae53415ff0c1cd9cd800272cc81954128c7a0e27f82af78925fac388.scope - libcontainer container e26b88efae53415ff0c1cd9cd800272cc81954128c7a0e27f82af78925fac388. Sep 12 17:04:32.318824 systemd[1]: Started cri-containerd-f79cb03034ded0bc676f7d23f87377c05c408f52f590da87b7840219570344dd.scope - libcontainer container f79cb03034ded0bc676f7d23f87377c05c408f52f590da87b7840219570344dd. Sep 12 17:04:32.358453 containerd[1504]: time="2025-09-12T17:04:32.358413103Z" level=info msg="StartContainer for \"f79cb03034ded0bc676f7d23f87377c05c408f52f590da87b7840219570344dd\" returns successfully" Sep 12 17:04:32.367734 containerd[1504]: time="2025-09-12T17:04:32.367689936Z" level=info msg="StartContainer for \"e26b88efae53415ff0c1cd9cd800272cc81954128c7a0e27f82af78925fac388\" returns successfully" Sep 12 17:04:33.005809 kubelet[2654]: I0912 17:04:33.005036 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mrlnn" podStartSLOduration=36.004985266 podStartE2EDuration="36.004985266s" podCreationTimestamp="2025-09-12 17:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:04:33.003482962 +0000 UTC m=+41.298304570" watchObservedRunningTime="2025-09-12 17:04:33.004985266 +0000 UTC m=+41.299806834" Sep 12 17:04:36.458028 systemd[1]: Started sshd@10-10.0.0.14:22-10.0.0.1:36892.service - OpenSSH per-connection server daemon (10.0.0.1:36892). Sep 12 17:04:36.509995 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 36892 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:36.512501 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:36.520395 systemd-logind[1486]: New session 11 of user core. Sep 12 17:04:36.542923 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:04:36.664409 sshd[4060]: Connection closed by 10.0.0.1 port 36892 Sep 12 17:04:36.664857 sshd-session[4057]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:36.678837 systemd[1]: sshd@10-10.0.0.14:22-10.0.0.1:36892.service: Deactivated successfully. Sep 12 17:04:36.680877 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:04:36.681909 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. 
Sep 12 17:04:36.685061 systemd[1]: Started sshd@11-10.0.0.14:22-10.0.0.1:36900.service - OpenSSH per-connection server daemon (10.0.0.1:36900). Sep 12 17:04:36.686041 systemd-logind[1486]: Removed session 11. Sep 12 17:04:36.751296 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 36900 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:36.751898 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:36.756502 systemd-logind[1486]: New session 12 of user core. Sep 12 17:04:36.763888 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:04:36.923664 sshd[4077]: Connection closed by 10.0.0.1 port 36900 Sep 12 17:04:36.922674 sshd-session[4074]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:36.933403 systemd[1]: sshd@11-10.0.0.14:22-10.0.0.1:36900.service: Deactivated successfully. Sep 12 17:04:36.938966 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:04:36.942145 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:04:36.943904 systemd[1]: Started sshd@12-10.0.0.14:22-10.0.0.1:36904.service - OpenSSH per-connection server daemon (10.0.0.1:36904). Sep 12 17:04:36.948675 systemd-logind[1486]: Removed session 12. Sep 12 17:04:36.996554 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 36904 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:36.998995 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:37.004468 systemd-logind[1486]: New session 13 of user core. Sep 12 17:04:37.014815 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:04:37.127186 sshd[4091]: Connection closed by 10.0.0.1 port 36904 Sep 12 17:04:37.127526 sshd-session[4088]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:37.130975 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:04:37.131258 systemd[1]: sshd@12-10.0.0.14:22-10.0.0.1:36904.service: Deactivated successfully. Sep 12 17:04:37.134077 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:04:37.135614 systemd-logind[1486]: Removed session 13. Sep 12 17:04:42.143113 systemd[1]: Started sshd@13-10.0.0.14:22-10.0.0.1:49324.service - OpenSSH per-connection server daemon (10.0.0.1:49324). Sep 12 17:04:42.204080 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 49324 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:42.205536 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:42.209578 systemd-logind[1486]: New session 14 of user core. Sep 12 17:04:42.220868 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:04:42.348027 sshd[4109]: Connection closed by 10.0.0.1 port 49324 Sep 12 17:04:42.348359 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:42.353187 systemd[1]: sshd@13-10.0.0.14:22-10.0.0.1:49324.service: Deactivated successfully. Sep 12 17:04:42.355737 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:04:42.357862 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:04:42.359276 systemd-logind[1486]: Removed session 14. Sep 12 17:04:47.359967 systemd[1]: Started sshd@14-10.0.0.14:22-10.0.0.1:49334.service - OpenSSH per-connection server daemon (10.0.0.1:49334). 
Sep 12 17:04:47.425737 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 49334 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:47.427020 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:47.433155 systemd-logind[1486]: New session 15 of user core. Sep 12 17:04:47.443970 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:04:47.564310 sshd[4126]: Connection closed by 10.0.0.1 port 49334 Sep 12 17:04:47.564841 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:47.575326 systemd[1]: sshd@14-10.0.0.14:22-10.0.0.1:49334.service: Deactivated successfully. Sep 12 17:04:47.577191 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:04:47.579284 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:04:47.586447 systemd[1]: Started sshd@15-10.0.0.14:22-10.0.0.1:49348.service - OpenSSH per-connection server daemon (10.0.0.1:49348). Sep 12 17:04:47.587556 systemd-logind[1486]: Removed session 15. Sep 12 17:04:47.638439 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 49348 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:47.640122 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:47.646121 systemd-logind[1486]: New session 16 of user core. Sep 12 17:04:47.655818 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:04:47.849837 sshd[4142]: Connection closed by 10.0.0.1 port 49348 Sep 12 17:04:47.850397 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:47.859715 systemd[1]: sshd@15-10.0.0.14:22-10.0.0.1:49348.service: Deactivated successfully. Sep 12 17:04:47.861794 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:04:47.862572 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:04:47.865073 systemd[1]: Started sshd@16-10.0.0.14:22-10.0.0.1:49350.service - OpenSSH per-connection server daemon (10.0.0.1:49350). Sep 12 17:04:47.866334 systemd-logind[1486]: Removed session 16. Sep 12 17:04:47.929385 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 49350 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:47.930679 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:47.934855 systemd-logind[1486]: New session 17 of user core. Sep 12 17:04:47.945819 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:04:49.161684 sshd[4156]: Connection closed by 10.0.0.1 port 49350 Sep 12 17:04:49.162041 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:49.177483 systemd[1]: sshd@16-10.0.0.14:22-10.0.0.1:49350.service: Deactivated successfully. Sep 12 17:04:49.182567 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:04:49.185443 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:04:49.189933 systemd[1]: Started sshd@17-10.0.0.14:22-10.0.0.1:49352.service - OpenSSH per-connection server daemon (10.0.0.1:49352). Sep 12 17:04:49.192893 systemd-logind[1486]: Removed session 17. 
Sep 12 17:04:49.245501 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 49352 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:49.247072 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:49.251754 systemd-logind[1486]: New session 18 of user core. Sep 12 17:04:49.266829 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:04:49.485665 sshd[4179]: Connection closed by 10.0.0.1 port 49352 Sep 12 17:04:49.486224 sshd-session[4176]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:49.500149 systemd[1]: sshd@17-10.0.0.14:22-10.0.0.1:49352.service: Deactivated successfully. Sep 12 17:04:49.502029 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:04:49.503249 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:04:49.505041 systemd-logind[1486]: Removed session 18. Sep 12 17:04:49.507008 systemd[1]: Started sshd@18-10.0.0.14:22-10.0.0.1:49356.service - OpenSSH per-connection server daemon (10.0.0.1:49356). Sep 12 17:04:49.564322 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 49356 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:49.565661 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:49.570265 systemd-logind[1486]: New session 19 of user core. Sep 12 17:04:49.584804 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:04:49.704821 sshd[4194]: Connection closed by 10.0.0.1 port 49356 Sep 12 17:04:49.705508 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:49.709020 systemd[1]: sshd@18-10.0.0.14:22-10.0.0.1:49356.service: Deactivated successfully. Sep 12 17:04:49.712822 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:04:49.714039 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:04:49.714930 systemd-logind[1486]: Removed session 19. Sep 12 17:04:54.721279 systemd[1]: Started sshd@19-10.0.0.14:22-10.0.0.1:39580.service - OpenSSH per-connection server daemon (10.0.0.1:39580). Sep 12 17:04:54.771274 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 39580 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:54.772399 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:54.776623 systemd-logind[1486]: New session 20 of user core. Sep 12 17:04:54.784922 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:04:54.896632 sshd[4216]: Connection closed by 10.0.0.1 port 39580 Sep 12 17:04:54.897140 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Sep 12 17:04:54.900516 systemd[1]: sshd@19-10.0.0.14:22-10.0.0.1:39580.service: Deactivated successfully. Sep 12 17:04:54.902091 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:04:54.902828 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:04:54.904413 systemd-logind[1486]: Removed session 20. Sep 12 17:04:59.912993 systemd[1]: Started sshd@20-10.0.0.14:22-10.0.0.1:58220.service - OpenSSH per-connection server daemon (10.0.0.1:58220). 
Sep 12 17:04:59.971027 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 58220 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:04:59.972337 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:04:59.981319 systemd-logind[1486]: New session 21 of user core. Sep 12 17:04:59.997835 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:05:00.129443 sshd[4234]: Connection closed by 10.0.0.1 port 58220 Sep 12 17:05:00.129847 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Sep 12 17:05:00.133692 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:05:00.134304 systemd[1]: sshd@20-10.0.0.14:22-10.0.0.1:58220.service: Deactivated successfully. Sep 12 17:05:00.135966 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:05:00.139082 systemd-logind[1486]: Removed session 21. Sep 12 17:05:05.140816 systemd[1]: Started sshd@21-10.0.0.14:22-10.0.0.1:58226.service - OpenSSH per-connection server daemon (10.0.0.1:58226). Sep 12 17:05:05.196452 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 58226 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:05:05.197484 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:05.201939 systemd-logind[1486]: New session 22 of user core. Sep 12 17:05:05.213834 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:05:05.322700 sshd[4251]: Connection closed by 10.0.0.1 port 58226 Sep 12 17:05:05.323191 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Sep 12 17:05:05.336829 systemd[1]: sshd@21-10.0.0.14:22-10.0.0.1:58226.service: Deactivated successfully. Sep 12 17:05:05.338297 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:05:05.338947 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:05:05.341105 systemd[1]: Started sshd@22-10.0.0.14:22-10.0.0.1:58230.service - OpenSSH per-connection server daemon (10.0.0.1:58230). Sep 12 17:05:05.342964 systemd-logind[1486]: Removed session 22. Sep 12 17:05:05.395313 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 58230 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:05:05.396763 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:05.400668 systemd-logind[1486]: New session 23 of user core. Sep 12 17:05:05.407810 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 12 17:05:06.903941 kubelet[2654]: I0912 17:05:06.903867 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8v4tw" podStartSLOduration=69.903845737 podStartE2EDuration="1m9.903845737s" podCreationTimestamp="2025-09-12 17:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:04:33.035362597 +0000 UTC m=+41.330184205" watchObservedRunningTime="2025-09-12 17:05:06.903845737 +0000 UTC m=+75.198667345" Sep 12 17:05:06.915120 containerd[1504]: time="2025-09-12T17:05:06.914790040Z" level=info msg="StopContainer for \"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\" with timeout 30 (s)" Sep 12 17:05:06.923988 containerd[1504]: time="2025-09-12T17:05:06.923940866Z" level=info msg="Stop container \"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\" with signal terminated" Sep 12 17:05:06.937770 systemd[1]: cri-containerd-80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d.scope: Deactivated successfully. Sep 12 17:05:06.940970 containerd[1504]: time="2025-09-12T17:05:06.940930480Z" level=info msg="received exit event container_id:\"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\" id:\"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\" pid:3064 exited_at:{seconds:1757696706 nanos:940696761}" Sep 12 17:05:06.941259 containerd[1504]: time="2025-09-12T17:05:06.941236880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\" id:\"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\" pid:3064 exited_at:{seconds:1757696706 nanos:940696761}" Sep 12 17:05:06.942993 containerd[1504]: time="2025-09-12T17:05:06.942965717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" id:\"083643fe9014e4786d33a8e07bda59b6d110e7b38a0bdb55c85399268c5799db\" pid:4289 exited_at:{seconds:1757696706 nanos:942212078}" Sep 12 17:05:06.943787 containerd[1504]: time="2025-09-12T17:05:06.943747276Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:05:06.945876 containerd[1504]: time="2025-09-12T17:05:06.945846673Z" level=info msg="StopContainer for \"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" with timeout 2 (s)" Sep 12 17:05:06.946194 containerd[1504]: time="2025-09-12T17:05:06.946169072Z" level=info msg="Stop container \"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" with signal terminated" Sep 12 17:05:06.954064 systemd-networkd[1440]: lxc_health: Link DOWN Sep 12 17:05:06.954070 systemd-networkd[1440]: lxc_health: Lost carrier Sep 12 17:05:06.954823 systemd-resolved[1358]: lxc_health: Failed to determine whether the interface is managed, ignoring: No such file or directory Sep 12 17:05:06.969054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d-rootfs.mount: Deactivated successfully. Sep 12 17:05:06.974183 systemd[1]: cri-containerd-591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68.scope: Deactivated successfully. 
Sep 12 17:05:06.974466 systemd[1]: cri-containerd-591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68.scope: Consumed 6.228s CPU time, 123.2M memory peak, 152K read from disk, 12.9M written to disk. Sep 12 17:05:06.977227 containerd[1504]: time="2025-09-12T17:05:06.977175224Z" level=info msg="received exit event container_id:\"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" id:\"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" pid:3334 exited_at:{seconds:1757696706 nanos:976845585}" Sep 12 17:05:06.977564 containerd[1504]: time="2025-09-12T17:05:06.977277944Z" level=info msg="TaskExit event in podsandbox handler container_id:\"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" id:\"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" pid:3334 exited_at:{seconds:1757696706 nanos:976845585}" Sep 12 17:05:06.987421 containerd[1504]: time="2025-09-12T17:05:06.987382049Z" level=info msg="StopContainer for \"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\" returns successfully" Sep 12 17:05:06.990905 containerd[1504]: time="2025-09-12T17:05:06.990862403Z" level=info msg="StopPodSandbox for \"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\"" Sep 12 17:05:06.994687 containerd[1504]: time="2025-09-12T17:05:06.994619438Z" level=info msg="Container to stop \"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:05:07.004829 systemd[1]: cri-containerd-657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1.scope: Deactivated successfully. Sep 12 17:05:07.007610 containerd[1504]: time="2025-09-12T17:05:07.007274940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\" id:\"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\" pid:2991 exit_status:137 exited_at:{seconds:1757696707 nanos:6621141}" Sep 12 17:05:07.008718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68-rootfs.mount: Deactivated successfully. 
Sep 12 17:05:07.018958 containerd[1504]: time="2025-09-12T17:05:07.018907006Z" level=info msg="StopContainer for \"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" returns successfully" Sep 12 17:05:07.019503 containerd[1504]: time="2025-09-12T17:05:07.019458245Z" level=info msg="StopPodSandbox for \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\"" Sep 12 17:05:07.019548 containerd[1504]: time="2025-09-12T17:05:07.019524685Z" level=info msg="Container to stop \"cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:05:07.019548 containerd[1504]: time="2025-09-12T17:05:07.019536165Z" level=info msg="Container to stop \"889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:05:07.019548 containerd[1504]: time="2025-09-12T17:05:07.019545245Z" level=info msg="Container to stop \"3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:05:07.019620 containerd[1504]: time="2025-09-12T17:05:07.019553325Z" level=info msg="Container to stop \"76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:05:07.019620 containerd[1504]: time="2025-09-12T17:05:07.019561125Z" level=info msg="Container to stop \"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:05:07.025911 systemd[1]: cri-containerd-ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278.scope: Deactivated successfully. Sep 12 17:05:07.041089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1-rootfs.mount: Deactivated successfully. Sep 12 17:05:07.047045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278-rootfs.mount: Deactivated successfully. 
Sep 12 17:05:07.057723 containerd[1504]: time="2025-09-12T17:05:07.057685679Z" level=info msg="shim disconnected" id=657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1 namespace=k8s.io Sep 12 17:05:07.063110 containerd[1504]: time="2025-09-12T17:05:07.057769239Z" level=warning msg="cleaning up after shim disconnected" id=657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1 namespace=k8s.io Sep 12 17:05:07.063494 containerd[1504]: time="2025-09-12T17:05:07.063317592Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:05:07.063494 containerd[1504]: time="2025-09-12T17:05:07.057975639Z" level=info msg="shim disconnected" id=ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278 namespace=k8s.io Sep 12 17:05:07.063494 containerd[1504]: time="2025-09-12T17:05:07.063424512Z" level=warning msg="cleaning up after shim disconnected" id=ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278 namespace=k8s.io Sep 12 17:05:07.063494 containerd[1504]: time="2025-09-12T17:05:07.063451352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:05:07.087007 containerd[1504]: time="2025-09-12T17:05:07.086947963Z" level=error msg="Failed to handle event container_id:\"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\" id:\"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\" pid:2991 exit_status:137 exited_at:{seconds:1757696707 nanos:6621141} for 657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Sep 12 17:05:07.087125 containerd[1504]: time="2025-09-12T17:05:07.087013483Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" id:\"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" pid:3023 exit_status:137 exited_at:{seconds:1757696707 nanos:27180876}" Sep 12 17:05:07.088499 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278-shm.mount: Deactivated successfully. Sep 12 17:05:07.088837 containerd[1504]: time="2025-09-12T17:05:07.088750881Z" level=info msg="TearDown network for sandbox \"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\" successfully" Sep 12 17:05:07.088837 containerd[1504]: time="2025-09-12T17:05:07.088777401Z" level=info msg="StopPodSandbox for \"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\" returns successfully" Sep 12 17:05:07.088966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1-shm.mount: Deactivated successfully. 
Sep 12 17:05:07.089529 containerd[1504]: time="2025-09-12T17:05:07.089434840Z" level=info msg="TearDown network for sandbox \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" successfully" Sep 12 17:05:07.089529 containerd[1504]: time="2025-09-12T17:05:07.089459600Z" level=info msg="StopPodSandbox for \"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" returns successfully" Sep 12 17:05:07.094116 containerd[1504]: time="2025-09-12T17:05:07.093751915Z" level=info msg="received exit event sandbox_id:\"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\" exit_status:137 exited_at:{seconds:1757696707 nanos:6621141}" Sep 12 17:05:07.094116 containerd[1504]: time="2025-09-12T17:05:07.093802995Z" level=info msg="received exit event sandbox_id:\"ec12b42ea0750d4030cf8e5de6af1c02af452d9c541f9192f6e0f6ef30e54278\" exit_status:137 exited_at:{seconds:1757696707 nanos:27180876}" Sep 12 17:05:07.285594 kubelet[2654]: I0912 17:05:07.285467 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-config-path\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285594 kubelet[2654]: I0912 17:05:07.285520 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drlsr\" (UniqueName: \"kubernetes.io/projected/54afb15d-0fb0-4b39-ad6e-ee5e5fded456-kube-api-access-drlsr\") pod \"54afb15d-0fb0-4b39-ad6e-ee5e5fded456\" (UID: \"54afb15d-0fb0-4b39-ad6e-ee5e5fded456\") " Sep 12 17:05:07.285594 kubelet[2654]: I0912 17:05:07.285539 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54afb15d-0fb0-4b39-ad6e-ee5e5fded456-cilium-config-path\") pod \"54afb15d-0fb0-4b39-ad6e-ee5e5fded456\" (UID: \"54afb15d-0fb0-4b39-ad6e-ee5e5fded456\") " Sep 12 17:05:07.285594 kubelet[2654]: I0912 17:05:07.285557 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-clustermesh-secrets\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285594 kubelet[2654]: I0912 17:05:07.285577 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-hostproc\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285594 kubelet[2654]: I0912 17:05:07.285591 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-run\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285864 kubelet[2654]: I0912 17:05:07.285606 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-host-proc-sys-net\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285864 kubelet[2654]: I0912 17:05:07.285621 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-bpf-maps\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285864 kubelet[2654]: I0912 17:05:07.285635 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cni-path\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285864 kubelet[2654]: I0912 17:05:07.285672 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xqqj\" (UniqueName: \"kubernetes.io/projected/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-kube-api-access-9xqqj\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285864 kubelet[2654]: I0912 17:05:07.285689 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-hubble-tls\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285864 kubelet[2654]: I0912 17:05:07.285706 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-lib-modules\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285988 kubelet[2654]: I0912 17:05:07.285719 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-etc-cni-netd\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285988 kubelet[2654]: I0912 17:05:07.285735 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-xtables-lock\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285988 kubelet[2654]: I0912 17:05:07.285752 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-cgroup\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.285988 kubelet[2654]: I0912 17:05:07.285766 2654 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-host-proc-sys-kernel\") pod \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\" (UID: \"4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4\") " Sep 12 17:05:07.289689 kubelet[2654]: I0912 17:05:07.289294 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:05:07.289689 kubelet[2654]: I0912 17:05:07.289290 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:05:07.289689 kubelet[2654]: I0912 17:05:07.289320 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:05:07.289689 kubelet[2654]: I0912 17:05:07.289326 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:05:07.289689 kubelet[2654]: I0912 17:05:07.289343 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:05:07.289888 kubelet[2654]: I0912 17:05:07.289363 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:05:07.289888 kubelet[2654]: I0912 17:05:07.289674 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:05:07.289888 kubelet[2654]: I0912 17:05:07.289688 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:05:07.289888 kubelet[2654]: I0912 17:05:07.289711 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:05:07.289888 kubelet[2654]: I0912 17:05:07.289713 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:05:07.291676 kubelet[2654]: I0912 17:05:07.291439 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:05:07.292974 kubelet[2654]: I0912 17:05:07.292913 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:05:07.292974 kubelet[2654]: I0912 17:05:07.292926 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-kube-api-access-9xqqj" (OuterVolumeSpecName: "kube-api-access-9xqqj") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "kube-api-access-9xqqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:05:07.293228 kubelet[2654]: I0912 17:05:07.293203 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54afb15d-0fb0-4b39-ad6e-ee5e5fded456-kube-api-access-drlsr" (OuterVolumeSpecName: "kube-api-access-drlsr") pod "54afb15d-0fb0-4b39-ad6e-ee5e5fded456" (UID: "54afb15d-0fb0-4b39-ad6e-ee5e5fded456"). InnerVolumeSpecName "kube-api-access-drlsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:05:07.293378 kubelet[2654]: I0912 17:05:07.293352 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54afb15d-0fb0-4b39-ad6e-ee5e5fded456-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "54afb15d-0fb0-4b39-ad6e-ee5e5fded456" (UID: "54afb15d-0fb0-4b39-ad6e-ee5e5fded456"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:05:07.295045 kubelet[2654]: I0912 17:05:07.295012 2654 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" (UID: "4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:05:07.386895 kubelet[2654]: I0912 17:05:07.386719 2654 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drlsr\" (UniqueName: \"kubernetes.io/projected/54afb15d-0fb0-4b39-ad6e-ee5e5fded456-kube-api-access-drlsr\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.386895 kubelet[2654]: I0912 17:05:07.386755 2654 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.386895 kubelet[2654]: I0912 17:05:07.386767 2654 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.386895 kubelet[2654]: I0912 17:05:07.386775 2654 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54afb15d-0fb0-4b39-ad6e-ee5e5fded456-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.386895 kubelet[2654]: I0912 17:05:07.386784 2654 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.386895 kubelet[2654]: I0912 17:05:07.386792 2654 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.386895 kubelet[2654]: I0912 17:05:07.386800 2654 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.386895 kubelet[2654]: I0912 17:05:07.386807 2654 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.387179 kubelet[2654]: I0912 17:05:07.386816 2654 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.387179 kubelet[2654]: I0912 17:05:07.386823 2654 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.387179 kubelet[2654]: I0912 17:05:07.386831 2654 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xqqj\" (UniqueName: \"kubernetes.io/projected/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-kube-api-access-9xqqj\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.387179 kubelet[2654]: I0912 17:05:07.386839 2654 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.387179 kubelet[2654]: I0912 17:05:07.386846 2654 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-etc-cni-netd\") on node 
\"localhost\" DevicePath \"\"" Sep 12 17:05:07.387179 kubelet[2654]: I0912 17:05:07.386855 2654 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.387179 kubelet[2654]: I0912 17:05:07.386863 2654 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.387179 kubelet[2654]: I0912 17:05:07.386870 2654 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 17:05:07.839518 systemd[1]: Removed slice kubepods-besteffort-pod54afb15d_0fb0_4b39_ad6e_ee5e5fded456.slice - libcontainer container kubepods-besteffort-pod54afb15d_0fb0_4b39_ad6e_ee5e5fded456.slice. Sep 12 17:05:07.840875 systemd[1]: Removed slice kubepods-burstable-pod4cbc3c74_cbc1_4824_b63f_cf5d5bfe09c4.slice - libcontainer container kubepods-burstable-pod4cbc3c74_cbc1_4824_b63f_cf5d5bfe09c4.slice. Sep 12 17:05:07.840961 systemd[1]: kubepods-burstable-pod4cbc3c74_cbc1_4824_b63f_cf5d5bfe09c4.slice: Consumed 6.313s CPU time, 123.5M memory peak, 176K read from disk, 12.9M written to disk. Sep 12 17:05:07.967496 systemd[1]: var-lib-kubelet-pods-4cbc3c74\x2dcbc1\x2d4824\x2db63f\x2dcf5d5bfe09c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9xqqj.mount: Deactivated successfully. Sep 12 17:05:07.967600 systemd[1]: var-lib-kubelet-pods-54afb15d\x2d0fb0\x2d4b39\x2dad6e\x2dee5e5fded456-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddrlsr.mount: Deactivated successfully. Sep 12 17:05:07.967674 systemd[1]: var-lib-kubelet-pods-4cbc3c74\x2dcbc1\x2d4824\x2db63f\x2dcf5d5bfe09c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:05:07.967728 systemd[1]: var-lib-kubelet-pods-4cbc3c74\x2dcbc1\x2d4824\x2db63f\x2dcf5d5bfe09c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 12 17:05:08.086963 kubelet[2654]: I0912 17:05:08.086907 2654 scope.go:117] "RemoveContainer" containerID="591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68" Sep 12 17:05:08.090554 containerd[1504]: time="2025-09-12T17:05:08.090474330Z" level=info msg="RemoveContainer for \"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\"" Sep 12 17:05:08.097902 containerd[1504]: time="2025-09-12T17:05:08.097871444Z" level=info msg="RemoveContainer for \"591346ebd68f2014821ef554e89785d19f4ff28dafdf0168c4896cddde05ac68\" returns successfully" Sep 12 17:05:08.098233 kubelet[2654]: I0912 17:05:08.098136 2654 scope.go:117] "RemoveContainer" containerID="76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77" Sep 12 17:05:08.099825 containerd[1504]: time="2025-09-12T17:05:08.099784882Z" level=info msg="RemoveContainer for \"76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77\"" Sep 12 17:05:08.104536 containerd[1504]: time="2025-09-12T17:05:08.104500998Z" level=info msg="RemoveContainer for \"76ee87e2d7d701a38bbb6d3557eafafa09ffb9dea5c4fd90887f753bbb6fac77\" returns successfully" Sep 12 17:05:08.104747 kubelet[2654]: I0912 17:05:08.104725 2654 scope.go:117] "RemoveContainer" containerID="889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d" Sep 12 17:05:08.107495 containerd[1504]: time="2025-09-12T17:05:08.107471435Z" level=info msg="RemoveContainer for \"889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d\"" Sep 12 17:05:08.114668 containerd[1504]: time="2025-09-12T17:05:08.113911109Z" level=info msg="RemoveContainer for \"889cffc6fb8aa0a4d512f6c5fa51d3803d5498beeb5d8d777f5697bdf849a69d\" returns successfully" Sep 12 17:05:08.115689 kubelet[2654]: I0912 17:05:08.115628 2654 scope.go:117] "RemoveContainer" containerID="3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479" Sep 12 17:05:08.117613 containerd[1504]: time="2025-09-12T17:05:08.117588626Z" level=info msg="RemoveContainer for \"3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479\"" Sep 12 17:05:08.120442 containerd[1504]: time="2025-09-12T17:05:08.120416263Z" level=info msg="RemoveContainer for \"3ead40f3bb443c293d41f3a5c2ee127ccd39f60966bec09e5ff2f8df0f374479\" returns successfully" Sep 12 17:05:08.120600 kubelet[2654]: I0912 17:05:08.120577 2654 scope.go:117] "RemoveContainer" containerID="cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d" Sep 12 17:05:08.122567 containerd[1504]: time="2025-09-12T17:05:08.122140462Z" level=info msg="RemoveContainer for \"cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d\"" Sep 12 17:05:08.124562 containerd[1504]: time="2025-09-12T17:05:08.124533140Z" level=info msg="RemoveContainer for \"cb76c436ef976881afbdab377e9e3c15261103b92f85fa1506cac3fc18d4e34d\" returns successfully" Sep 12 17:05:08.124828 kubelet[2654]: I0912 17:05:08.124802 2654 scope.go:117] "RemoveContainer" containerID="80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d" Sep 12 17:05:08.126297 containerd[1504]: time="2025-09-12T17:05:08.126274298Z" level=info msg="RemoveContainer for \"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\"" Sep 12 17:05:08.128784 containerd[1504]: time="2025-09-12T17:05:08.128754376Z" level=info msg="RemoveContainer for \"80cfb91ada6d693be8e743a5464457d2281bc2188cf519db9fed2b924962ee0d\" returns successfully" Sep 12 17:05:08.872529 sshd[4268]: Connection closed by 10.0.0.1 port 58230 Sep 12 17:05:08.872900 sshd-session[4265]: pam_unix(sshd:session): session 
closed for user core Sep 12 17:05:08.883722 systemd[1]: sshd@22-10.0.0.14:22-10.0.0.1:58230.service: Deactivated successfully. Sep 12 17:05:08.885820 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:05:08.886482 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:05:08.888567 systemd[1]: Started sshd@23-10.0.0.14:22-10.0.0.1:58246.service - OpenSSH per-connection server daemon (10.0.0.1:58246). Sep 12 17:05:08.889406 systemd-logind[1486]: Removed session 23. Sep 12 17:05:08.945016 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 58246 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:05:08.946294 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:08.950084 systemd-logind[1486]: New session 24 of user core. Sep 12 17:05:08.959860 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:05:08.974861 containerd[1504]: time="2025-09-12T17:05:08.974728931Z" level=info msg="TaskExit event in podsandbox handler container_id:\"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\" id:\"657c36cf6aac9a41a1d12130c8e5d82d9c52a6974bae08bcdf06d9808e1728b1\" pid:2991 exit_status:137 exited_at:{seconds:1757696707 nanos:6621141}" Sep 12 17:05:09.707095 sshd[4426]: Connection closed by 10.0.0.1 port 58246 Sep 12 17:05:09.707683 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Sep 12 17:05:09.717631 systemd[1]: sshd@23-10.0.0.14:22-10.0.0.1:58246.service: Deactivated successfully. Sep 12 17:05:09.723496 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:05:09.728038 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:05:09.732729 systemd[1]: Started sshd@24-10.0.0.14:22-10.0.0.1:58254.service - OpenSSH per-connection server daemon (10.0.0.1:58254). Sep 12 17:05:09.735669 systemd-logind[1486]: Removed session 24. 
Sep 12 17:05:09.739433 kubelet[2654]: E0912 17:05:09.738994 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="54afb15d-0fb0-4b39-ad6e-ee5e5fded456" containerName="cilium-operator" Sep 12 17:05:09.739433 kubelet[2654]: E0912 17:05:09.739037 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" containerName="apply-sysctl-overwrites" Sep 12 17:05:09.739433 kubelet[2654]: E0912 17:05:09.739047 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" containerName="cilium-agent" Sep 12 17:05:09.739433 kubelet[2654]: E0912 17:05:09.739054 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" containerName="mount-cgroup" Sep 12 17:05:09.739433 kubelet[2654]: E0912 17:05:09.739059 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" containerName="mount-bpf-fs" Sep 12 17:05:09.739433 kubelet[2654]: E0912 17:05:09.739065 2654 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" containerName="clean-cilium-state" Sep 12 17:05:09.739433 kubelet[2654]: I0912 17:05:09.739087 2654 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" containerName="cilium-agent" Sep 12 17:05:09.739433 kubelet[2654]: I0912 17:05:09.739095 2654 memory_manager.go:354] "RemoveStaleState removing state" podUID="54afb15d-0fb0-4b39-ad6e-ee5e5fded456" containerName="cilium-operator" Sep 12 17:05:09.756771 systemd[1]: Created slice kubepods-burstable-podc96dbcf3_98b3_433d_93ea_7a45a432aae1.slice - libcontainer container kubepods-burstable-podc96dbcf3_98b3_433d_93ea_7a45a432aae1.slice. Sep 12 17:05:09.803998 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 58254 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:05:09.805438 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:09.809255 systemd-logind[1486]: New session 25 of user core. Sep 12 17:05:09.813821 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:05:09.834351 kubelet[2654]: I0912 17:05:09.834306 2654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4" path="/var/lib/kubelet/pods/4cbc3c74-cbc1-4824-b63f-cf5d5bfe09c4/volumes" Sep 12 17:05:09.834872 kubelet[2654]: I0912 17:05:09.834853 2654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54afb15d-0fb0-4b39-ad6e-ee5e5fded456" path="/var/lib/kubelet/pods/54afb15d-0fb0-4b39-ad6e-ee5e5fded456/volumes" Sep 12 17:05:09.864357 sshd[4442]: Connection closed by 10.0.0.1 port 58254 Sep 12 17:05:09.863801 sshd-session[4439]: pam_unix(sshd:session): session closed for user core Sep 12 17:05:09.873362 systemd[1]: sshd@24-10.0.0.14:22-10.0.0.1:58254.service: Deactivated successfully. Sep 12 17:05:09.875263 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:05:09.876059 systemd-logind[1486]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:05:09.879205 systemd[1]: Started sshd@25-10.0.0.14:22-10.0.0.1:58256.service - OpenSSH per-connection server daemon (10.0.0.1:58256). Sep 12 17:05:09.880393 systemd-logind[1486]: Removed session 25. 
Sep 12 17:05:09.902708 kubelet[2654]: I0912 17:05:09.902622 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c96dbcf3-98b3-433d-93ea-7a45a432aae1-bpf-maps\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.902804 kubelet[2654]: I0912 17:05:09.902724 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c96dbcf3-98b3-433d-93ea-7a45a432aae1-cilium-cgroup\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.902804 kubelet[2654]: I0912 17:05:09.902759 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c96dbcf3-98b3-433d-93ea-7a45a432aae1-hubble-tls\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.902804 kubelet[2654]: I0912 17:05:09.902779 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c96dbcf3-98b3-433d-93ea-7a45a432aae1-host-proc-sys-net\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.902804 kubelet[2654]: I0912 17:05:09.902796 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c96dbcf3-98b3-433d-93ea-7a45a432aae1-cni-path\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.902921 kubelet[2654]: I0912 17:05:09.902811 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c96dbcf3-98b3-433d-93ea-7a45a432aae1-clustermesh-secrets\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.902921 kubelet[2654]: I0912 17:05:09.902873 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c96dbcf3-98b3-433d-93ea-7a45a432aae1-cilium-run\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.902921 kubelet[2654]: I0912 17:05:09.902912 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx8gx\" (UniqueName: \"kubernetes.io/projected/c96dbcf3-98b3-433d-93ea-7a45a432aae1-kube-api-access-sx8gx\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.902982 kubelet[2654]: I0912 17:05:09.902931 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c96dbcf3-98b3-433d-93ea-7a45a432aae1-hostproc\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.902982 kubelet[2654]: I0912 17:05:09.902947 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/c96dbcf3-98b3-433d-93ea-7a45a432aae1-etc-cni-netd\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.902982 kubelet[2654]: I0912 17:05:09.902964 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c96dbcf3-98b3-433d-93ea-7a45a432aae1-host-proc-sys-kernel\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.903038 kubelet[2654]: I0912 17:05:09.902990 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c96dbcf3-98b3-433d-93ea-7a45a432aae1-cilium-config-path\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.903038 kubelet[2654]: I0912 17:05:09.903007 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c96dbcf3-98b3-433d-93ea-7a45a432aae1-xtables-lock\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.903038 kubelet[2654]: I0912 17:05:09.903022 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c96dbcf3-98b3-433d-93ea-7a45a432aae1-cilium-ipsec-secrets\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.903098 kubelet[2654]: I0912 17:05:09.903038 2654 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c96dbcf3-98b3-433d-93ea-7a45a432aae1-lib-modules\") pod \"cilium-c7wjz\" (UID: \"c96dbcf3-98b3-433d-93ea-7a45a432aae1\") " pod="kube-system/cilium-c7wjz" Sep 12 17:05:09.941981 sshd[4449]: Accepted publickey for core from 10.0.0.1 port 58256 ssh2: RSA SHA256:zGEfOPQeThQpVzN9sa62qLAN/hzBbgDs44dXJ/syq0Y Sep 12 17:05:09.943703 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:09.947999 systemd-logind[1486]: New session 26 of user core. Sep 12 17:05:09.958808 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:05:10.062592 containerd[1504]: time="2025-09-12T17:05:10.062377007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7wjz,Uid:c96dbcf3-98b3-433d-93ea-7a45a432aae1,Namespace:kube-system,Attempt:0,}" Sep 12 17:05:10.090291 containerd[1504]: time="2025-09-12T17:05:10.090199718Z" level=info msg="connecting to shim e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14" address="unix:///run/containerd/s/7b09f8cd42aff6753be44bb9847d4884236e37b05d84425cf71046b6bba1e39f" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:05:10.115860 systemd[1]: Started cri-containerd-e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14.scope - libcontainer container e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14. 
Sep 12 17:05:10.140343 containerd[1504]: time="2025-09-12T17:05:10.140300422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7wjz,Uid:c96dbcf3-98b3-433d-93ea-7a45a432aae1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14\"" Sep 12 17:05:10.142929 containerd[1504]: time="2025-09-12T17:05:10.142896942Z" level=info msg="CreateContainer within sandbox \"e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:05:10.149674 containerd[1504]: time="2025-09-12T17:05:10.149448740Z" level=info msg="Container 4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:05:10.155824 containerd[1504]: time="2025-09-12T17:05:10.155773658Z" level=info msg="CreateContainer within sandbox \"e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c\"" Sep 12 17:05:10.156808 containerd[1504]: time="2025-09-12T17:05:10.156778457Z" level=info msg="StartContainer for \"4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c\"" Sep 12 17:05:10.157878 containerd[1504]: time="2025-09-12T17:05:10.157844857Z" level=info msg="connecting to shim 4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c" address="unix:///run/containerd/s/7b09f8cd42aff6753be44bb9847d4884236e37b05d84425cf71046b6bba1e39f" protocol=ttrpc version=3 Sep 12 17:05:10.180863 systemd[1]: Started cri-containerd-4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c.scope - libcontainer container 4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c. Sep 12 17:05:10.214995 systemd[1]: cri-containerd-4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c.scope: Deactivated successfully. 
Sep 12 17:05:10.218926 containerd[1504]: time="2025-09-12T17:05:10.218881918Z" level=info msg="received exit event container_id:\"4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c\" id:\"4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c\" pid:4520 exited_at:{seconds:1757696710 nanos:218552358}" Sep 12 17:05:10.219393 containerd[1504]: time="2025-09-12T17:05:10.219122358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c\" id:\"4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c\" pid:4520 exited_at:{seconds:1757696710 nanos:218552358}" Sep 12 17:05:10.226345 containerd[1504]: time="2025-09-12T17:05:10.226310636Z" level=info msg="StartContainer for \"4c19c714bf46192f2f96ae728dcd476af99eaf2486e7a5a02ad3d4489360ba1c\" returns successfully" Sep 12 17:05:11.105208 containerd[1504]: time="2025-09-12T17:05:11.105127914Z" level=info msg="CreateContainer within sandbox \"e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:05:11.132043 containerd[1504]: time="2025-09-12T17:05:11.132001273Z" level=info msg="Container 70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:05:11.139909 containerd[1504]: time="2025-09-12T17:05:11.139730833Z" level=info msg="CreateContainer within sandbox \"e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080\"" Sep 12 17:05:11.140335 containerd[1504]: time="2025-09-12T17:05:11.140304833Z" level=info msg="StartContainer for \"70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080\"" Sep 12 17:05:11.141395 containerd[1504]: time="2025-09-12T17:05:11.141106233Z" level=info msg="connecting to shim 70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080" address="unix:///run/containerd/s/7b09f8cd42aff6753be44bb9847d4884236e37b05d84425cf71046b6bba1e39f" protocol=ttrpc version=3 Sep 12 17:05:11.163869 systemd[1]: Started cri-containerd-70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080.scope - libcontainer container 70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080. Sep 12 17:05:11.190003 containerd[1504]: time="2025-09-12T17:05:11.189939352Z" level=info msg="StartContainer for \"70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080\" returns successfully" Sep 12 17:05:11.197585 systemd[1]: cri-containerd-70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080.scope: Deactivated successfully. 
Sep 12 17:05:11.198705 containerd[1504]: time="2025-09-12T17:05:11.198558791Z" level=info msg="received exit event container_id:\"70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080\" id:\"70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080\" pid:4568 exited_at:{seconds:1757696711 nanos:198096631}" Sep 12 17:05:11.198804 containerd[1504]: time="2025-09-12T17:05:11.198775151Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080\" id:\"70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080\" pid:4568 exited_at:{seconds:1757696711 nanos:198096631}" Sep 12 17:05:11.217634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70db587f0cd06983d5f80dad10a1a539cb5103858ae2c7838a68d73ec0a2b080-rootfs.mount: Deactivated successfully. Sep 12 17:05:11.910971 kubelet[2654]: E0912 17:05:11.910899 2654 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:05:12.113923 containerd[1504]: time="2025-09-12T17:05:12.113856759Z" level=info msg="CreateContainer within sandbox \"e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:05:12.124292 containerd[1504]: time="2025-09-12T17:05:12.124249882Z" level=info msg="Container bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:05:12.134981 containerd[1504]: time="2025-09-12T17:05:12.134901885Z" level=info msg="CreateContainer within sandbox \"e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1\"" Sep 12 17:05:12.135596 containerd[1504]: time="2025-09-12T17:05:12.135564845Z" level=info msg="StartContainer for \"bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1\"" Sep 12 17:05:12.136949 containerd[1504]: time="2025-09-12T17:05:12.136920565Z" level=info msg="connecting to shim bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1" address="unix:///run/containerd/s/7b09f8cd42aff6753be44bb9847d4884236e37b05d84425cf71046b6bba1e39f" protocol=ttrpc version=3 Sep 12 17:05:12.158808 systemd[1]: Started cri-containerd-bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1.scope - libcontainer container bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1. Sep 12 17:05:12.198030 systemd[1]: cri-containerd-bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1.scope: Deactivated successfully. 
Sep 12 17:05:12.200983 containerd[1504]: time="2025-09-12T17:05:12.200940621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1\" id:\"bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1\" pid:4612 exited_at:{seconds:1757696712 nanos:200692341}" Sep 12 17:05:12.201674 containerd[1504]: time="2025-09-12T17:05:12.201179061Z" level=info msg="StartContainer for \"bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1\" returns successfully" Sep 12 17:05:12.206348 containerd[1504]: time="2025-09-12T17:05:12.206297303Z" level=info msg="received exit event container_id:\"bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1\" id:\"bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1\" pid:4612 exited_at:{seconds:1757696712 nanos:200692341}" Sep 12 17:05:12.223778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc9e7cee20feeb7a8374a1e753cbbc94de7efa4bb87446f89328e6874b705ac1-rootfs.mount: Deactivated successfully. Sep 12 17:05:13.120127 containerd[1504]: time="2025-09-12T17:05:13.119533442Z" level=info msg="CreateContainer within sandbox \"e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:05:13.136913 containerd[1504]: time="2025-09-12T17:05:13.136867931Z" level=info msg="Container ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:05:13.144732 containerd[1504]: time="2025-09-12T17:05:13.144680375Z" level=info msg="CreateContainer within sandbox \"e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003\"" Sep 12 17:05:13.146364 containerd[1504]: time="2025-09-12T17:05:13.145180336Z" level=info msg="StartContainer for \"ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003\"" Sep 12 17:05:13.146364 containerd[1504]: time="2025-09-12T17:05:13.146127576Z" level=info msg="connecting to shim ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003" address="unix:///run/containerd/s/7b09f8cd42aff6753be44bb9847d4884236e37b05d84425cf71046b6bba1e39f" protocol=ttrpc version=3 Sep 12 17:05:13.177843 systemd[1]: Started cri-containerd-ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003.scope - libcontainer container ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003. Sep 12 17:05:13.207518 systemd[1]: cri-containerd-ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003.scope: Deactivated successfully. 
Sep 12 17:05:13.209955 containerd[1504]: time="2025-09-12T17:05:13.209923249Z" level=info msg="received exit event container_id:\"ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003\" id:\"ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003\" pid:4653 exited_at:{seconds:1757696713 nanos:209254209}" Sep 12 17:05:13.210425 containerd[1504]: time="2025-09-12T17:05:13.210131969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003\" id:\"ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003\" pid:4653 exited_at:{seconds:1757696713 nanos:209254209}" Sep 12 17:05:13.217093 containerd[1504]: time="2025-09-12T17:05:13.217054613Z" level=info msg="StartContainer for \"ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003\" returns successfully" Sep 12 17:05:13.233160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab579075457c42559eb2e3a228200ef19c9ea34fcf754ec174ce186911256003-rootfs.mount: Deactivated successfully. Sep 12 17:05:13.658105 kubelet[2654]: I0912 17:05:13.658064 2654 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:05:13Z","lastTransitionTime":"2025-09-12T17:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 17:05:14.126958 containerd[1504]: time="2025-09-12T17:05:14.126882235Z" level=info msg="CreateContainer within sandbox \"e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:05:14.137530 containerd[1504]: time="2025-09-12T17:05:14.137483163Z" level=info msg="Container 5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:05:14.148951 containerd[1504]: time="2025-09-12T17:05:14.148892292Z" level=info msg="CreateContainer within sandbox \"e19cdfbdc977321f070c562d4ca273635d4da42939db47a29b52b18327023e14\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea\"" Sep 12 17:05:14.149974 containerd[1504]: time="2025-09-12T17:05:14.149943853Z" level=info msg="StartContainer for \"5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea\"" Sep 12 17:05:14.151102 containerd[1504]: time="2025-09-12T17:05:14.151077813Z" level=info msg="connecting to shim 5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea" address="unix:///run/containerd/s/7b09f8cd42aff6753be44bb9847d4884236e37b05d84425cf71046b6bba1e39f" protocol=ttrpc version=3 Sep 12 17:05:14.176819 systemd[1]: Started cri-containerd-5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea.scope - libcontainer container 5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea. 
Sep 12 17:05:14.204305 containerd[1504]: time="2025-09-12T17:05:14.204264015Z" level=info msg="StartContainer for \"5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea\" returns successfully" Sep 12 17:05:14.255765 containerd[1504]: time="2025-09-12T17:05:14.255717334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea\" id:\"2bbca37c99ccc2fb7bf0e537134ea94a4eb07a58d0e4d45c3171fb0fb7447d41\" pid:4720 exited_at:{seconds:1757696714 nanos:255401174}" Sep 12 17:05:14.477672 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 12 17:05:15.156069 kubelet[2654]: I0912 17:05:15.156006 2654 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c7wjz" podStartSLOduration=6.15597239 podStartE2EDuration="6.15597239s" podCreationTimestamp="2025-09-12 17:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:05:15.154736829 +0000 UTC m=+83.449558437" watchObservedRunningTime="2025-09-12 17:05:15.15597239 +0000 UTC m=+83.450793998" Sep 12 17:05:16.349667 containerd[1504]: time="2025-09-12T17:05:16.349614377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea\" id:\"5c1f713932c62151981069e2ca635496319c6ad21c8b5f7c96ca0ef732865249\" pid:4885 exit_status:1 exited_at:{seconds:1757696716 nanos:349346777}" Sep 12 17:05:17.361024 systemd-networkd[1440]: lxc_health: Link UP Sep 12 17:05:17.366957 systemd-networkd[1440]: lxc_health: Gained carrier Sep 12 17:05:18.507431 containerd[1504]: time="2025-09-12T17:05:18.507366978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea\" id:\"9c6d2f34d01a4e9903b2d866e42cf94349cc94632a8b0844dee40ca83aa62b27\" pid:5260 exited_at:{seconds:1757696718 nanos:507086497}" Sep 12 17:05:18.526845 systemd-networkd[1440]: lxc_health: Gained IPv6LL Sep 12 17:05:20.620934 containerd[1504]: time="2025-09-12T17:05:20.620895919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea\" id:\"4f2727b0ffc9f82be678d7a55a97cb443bd7e88e57e5cc5c08e4a63b725906ff\" pid:5289 exited_at:{seconds:1757696720 nanos:620399638}" Sep 12 17:05:20.623279 kubelet[2654]: E0912 17:05:20.623242 2654 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50756->127.0.0.1:35159: write tcp 127.0.0.1:50756->127.0.0.1:35159: write: broken pipe Sep 12 17:05:22.737531 containerd[1504]: time="2025-09-12T17:05:22.737313479Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea\" id:\"2370f00de6a1ad1b79c51d67994b3b7637371968b4555dc1e8132021a7364e8e\" pid:5319 exited_at:{seconds:1757696722 nanos:736488277}" Sep 12 17:05:24.861939 containerd[1504]: time="2025-09-12T17:05:24.861894578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ff205ef5a1c531293b71fa71d05f8cf4a091ca82d324a196352d3913a7fd2ea\" id:\"94493e468bf4c91b2e46ae35475e604c6791decc994c45b88e6ed1a764df2a77\" pid:5343 exited_at:{seconds:1757696724 nanos:861242496}" Sep 12 17:05:24.894128 sshd[4452]: Connection closed by 10.0.0.1 port 58256 Sep 12 17:05:24.894653 sshd-session[4449]: pam_unix(sshd:session): session closed for user core Sep 
12 17:05:24.898252 systemd[1]: sshd@25-10.0.0.14:22-10.0.0.1:58256.service: Deactivated successfully. Sep 12 17:05:24.900849 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:05:24.901894 systemd-logind[1486]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:05:24.903356 systemd-logind[1486]: Removed session 26.