Sep 9 00:19:10.853096 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 00:19:10.853117 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Sep 8 22:48:00 -00 2025
Sep 9 00:19:10.853127 kernel: KASLR enabled
Sep 9 00:19:10.853132 kernel: efi: EFI v2.7 by EDK II
Sep 9 00:19:10.853138 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 9 00:19:10.853144 kernel: random: crng init done
Sep 9 00:19:10.853151 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:19:10.853157 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 9 00:19:10.853163 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:19:10.853170 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:19:10.853177 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:19:10.853183 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:19:10.853189 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:19:10.853195 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:19:10.853203 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:19:10.853210 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:19:10.853217 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:19:10.853223 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:19:10.853229 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 00:19:10.853236 kernel: NUMA: Failed to initialise from firmware
Sep 9 00:19:10.853242 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:19:10.853248 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 9 00:19:10.853255 kernel: Zone ranges:
Sep 9 00:19:10.853261 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:19:10.853267 kernel: DMA32 empty
Sep 9 00:19:10.853274 kernel: Normal empty
Sep 9 00:19:10.853280 kernel: Movable zone start for each node
Sep 9 00:19:10.853287 kernel: Early memory node ranges
Sep 9 00:19:10.853293 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 9 00:19:10.853299 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 9 00:19:10.853306 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 9 00:19:10.853312 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 00:19:10.853318 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 00:19:10.853325 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 00:19:10.853331 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 00:19:10.853337 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:19:10.853343 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 00:19:10.853351 kernel: psci: probing for conduit method from ACPI.
Sep 9 00:19:10.853357 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 00:19:10.853364 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 00:19:10.853373 kernel: psci: Trusted OS migration not required
Sep 9 00:19:10.853379 kernel: psci: SMC Calling Convention v1.1
Sep 9 00:19:10.853386 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 00:19:10.853394 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 9 00:19:10.853400 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 9 00:19:10.853407 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 00:19:10.853414 kernel: Detected PIPT I-cache on CPU0
Sep 9 00:19:10.853420 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 00:19:10.853427 kernel: CPU features: detected: Hardware dirty bit management
Sep 9 00:19:10.853434 kernel: CPU features: detected: Spectre-v4
Sep 9 00:19:10.853441 kernel: CPU features: detected: Spectre-BHB
Sep 9 00:19:10.853447 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 00:19:10.853454 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 00:19:10.853462 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 00:19:10.853469 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 00:19:10.853476 kernel: alternatives: applying boot alternatives
Sep 9 00:19:10.853483 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7395fe4f9fb368b2829f9349e2a89e9a9e96b552675d3b261a5a30cf3c6cb15c
Sep 9 00:19:10.853490 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:19:10.853497 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:19:10.853504 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:19:10.853510 kernel: Fallback order for Node 0: 0
Sep 9 00:19:10.853517 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 9 00:19:10.853524 kernel: Policy zone: DMA
Sep 9 00:19:10.853530 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:19:10.853538 kernel: software IO TLB: area num 4.
Sep 9 00:19:10.853545 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 9 00:19:10.853552 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Sep 9 00:19:10.853559 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:19:10.853649 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:19:10.853658 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:19:10.853665 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:19:10.853671 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:19:10.853678 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:19:10.853685 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:19:10.853692 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:19:10.853702 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 00:19:10.853709 kernel: GICv3: 256 SPIs implemented
Sep 9 00:19:10.853715 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 00:19:10.853722 kernel: Root IRQ handler: gic_handle_irq
Sep 9 00:19:10.853729 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 00:19:10.853735 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 00:19:10.853742 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 00:19:10.853749 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 00:19:10.853756 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 9 00:19:10.853762 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 9 00:19:10.853769 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 9 00:19:10.853776 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:19:10.853784 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:19:10.853791 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 00:19:10.853798 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 00:19:10.853805 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 00:19:10.853811 kernel: arm-pv: using stolen time PV
Sep 9 00:19:10.853818 kernel: Console: colour dummy device 80x25
Sep 9 00:19:10.853826 kernel: ACPI: Core revision 20230628
Sep 9 00:19:10.853833 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 00:19:10.853840 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:19:10.853846 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 9 00:19:10.853854 kernel: landlock: Up and running.
Sep 9 00:19:10.853861 kernel: SELinux: Initializing.
Sep 9 00:19:10.853868 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:19:10.853875 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:19:10.853882 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:19:10.853889 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:19:10.853896 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:19:10.853903 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:19:10.853910 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 9 00:19:10.853918 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 9 00:19:10.853925 kernel: Remapping and enabling EFI services.
Sep 9 00:19:10.853932 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:19:10.853939 kernel: Detected PIPT I-cache on CPU1
Sep 9 00:19:10.853946 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 00:19:10.853952 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 9 00:19:10.853959 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:19:10.853966 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 00:19:10.853974 kernel: Detected PIPT I-cache on CPU2
Sep 9 00:19:10.853981 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 00:19:10.853989 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 9 00:19:10.853996 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:19:10.854008 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 00:19:10.854016 kernel: Detected PIPT I-cache on CPU3
Sep 9 00:19:10.854024 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 00:19:10.854031 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 9 00:19:10.854038 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:19:10.854045 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 00:19:10.854053 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:19:10.854061 kernel: SMP: Total of 4 processors activated.
Sep 9 00:19:10.854068 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 00:19:10.854076 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 00:19:10.854083 kernel: CPU features: detected: Common not Private translations
Sep 9 00:19:10.854090 kernel: CPU features: detected: CRC32 instructions
Sep 9 00:19:10.854097 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 00:19:10.854105 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 00:19:10.854112 kernel: CPU features: detected: LSE atomic instructions
Sep 9 00:19:10.854120 kernel: CPU features: detected: Privileged Access Never
Sep 9 00:19:10.854127 kernel: CPU features: detected: RAS Extension Support
Sep 9 00:19:10.854135 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 00:19:10.854142 kernel: CPU: All CPU(s) started at EL1
Sep 9 00:19:10.854149 kernel: alternatives: applying system-wide alternatives
Sep 9 00:19:10.854156 kernel: devtmpfs: initialized
Sep 9 00:19:10.854164 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:19:10.854171 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:19:10.854178 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:19:10.854187 kernel: SMBIOS 3.0.0 present.
Sep 9 00:19:10.854194 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 9 00:19:10.854201 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:19:10.854209 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 00:19:10.854216 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 00:19:10.854223 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 00:19:10.854230 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:19:10.854238 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 9 00:19:10.854245 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:19:10.854253 kernel: cpuidle: using governor menu
Sep 9 00:19:10.854261 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 00:19:10.854268 kernel: ASID allocator initialised with 32768 entries
Sep 9 00:19:10.854275 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:19:10.854283 kernel: Serial: AMBA PL011 UART driver
Sep 9 00:19:10.854290 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 00:19:10.854297 kernel: Modules: 0 pages in range for non-PLT usage
Sep 9 00:19:10.854304 kernel: Modules: 509008 pages in range for PLT usage
Sep 9 00:19:10.854312 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:19:10.854320 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:19:10.854327 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 00:19:10.854335 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 00:19:10.854342 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:19:10.854349 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:19:10.854356 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 00:19:10.854363 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 00:19:10.854370 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:19:10.854378 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:19:10.854386 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:19:10.854394 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:19:10.854401 kernel: ACPI: Interpreter enabled
Sep 9 00:19:10.854408 kernel: ACPI: Using GIC for interrupt routing
Sep 9 00:19:10.854415 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 00:19:10.854422 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 00:19:10.854430 kernel: printk: console [ttyAMA0] enabled
Sep 9 00:19:10.854437 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:19:10.854586 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:19:10.854682 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 00:19:10.854752 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 00:19:10.854816 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 00:19:10.854879 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 00:19:10.854889 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 00:19:10.854897 kernel: PCI host bridge to bus 0000:00
Sep 9 00:19:10.854967 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 00:19:10.855031 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 00:19:10.855089 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 00:19:10.855148 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:19:10.855233 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 9 00:19:10.855308 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 00:19:10.855375 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 9 00:19:10.855443 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 9 00:19:10.855509 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:19:10.855586 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:19:10.855661 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 9 00:19:10.855728 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 9 00:19:10.855787 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 00:19:10.855845 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 00:19:10.855907 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 00:19:10.855916 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 00:19:10.855924 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 00:19:10.855931 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 00:19:10.855939 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 00:19:10.855947 kernel: iommu: Default domain type: Translated
Sep 9 00:19:10.855954 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 00:19:10.855961 kernel: efivars: Registered efivars operations
Sep 9 00:19:10.855969 kernel: vgaarb: loaded
Sep 9 00:19:10.855978 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 00:19:10.855986 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:19:10.855993 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:19:10.856000 kernel: pnp: PnP ACPI init
Sep 9 00:19:10.856075 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 00:19:10.856086 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 00:19:10.856093 kernel: NET: Registered PF_INET protocol family
Sep 9 00:19:10.856101 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:19:10.856111 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:19:10.856118 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:19:10.856126 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:19:10.856133 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:19:10.856141 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:19:10.856148 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:19:10.856156 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:19:10.856163 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:19:10.856170 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:19:10.856179 kernel: kvm [1]: HYP mode not available
Sep 9 00:19:10.856187 kernel: Initialise system trusted keyrings
Sep 9 00:19:10.856195 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:19:10.856202 kernel: Key type asymmetric registered
Sep 9 00:19:10.856210 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:19:10.856217 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 00:19:10.856235 kernel: io scheduler mq-deadline registered
Sep 9 00:19:10.856243 kernel: io scheduler kyber registered
Sep 9 00:19:10.856251 kernel: io scheduler bfq registered
Sep 9 00:19:10.856260 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 00:19:10.856267 kernel: ACPI: button: Power Button [PWRB]
Sep 9 00:19:10.856275 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 00:19:10.856342 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 00:19:10.856353 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:19:10.856360 kernel: thunder_xcv, ver 1.0
Sep 9 00:19:10.856367 kernel: thunder_bgx, ver 1.0
Sep 9 00:19:10.856375 kernel: nicpf, ver 1.0
Sep 9 00:19:10.856382 kernel: nicvf, ver 1.0
Sep 9 00:19:10.856456 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 00:19:10.856518 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:19:10 UTC (1757377150)
Sep 9 00:19:10.856528 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 00:19:10.856537 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 9 00:19:10.856545 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 9 00:19:10.856552 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 00:19:10.856559 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:19:10.856575 kernel: Segment Routing with IPv6
Sep 9 00:19:10.856586 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:19:10.856593 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:19:10.856600 kernel: Key type dns_resolver registered
Sep 9 00:19:10.856607 kernel: registered taskstats version 1
Sep 9 00:19:10.856625 kernel: Loading compiled-in X.509 certificates
Sep 9 00:19:10.856636 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: f5b097e6797722e0cc665195a3c415b6be267631'
Sep 9 00:19:10.856644 kernel: Key type .fscrypt registered
Sep 9 00:19:10.856651 kernel: Key type fscrypt-provisioning registered
Sep 9 00:19:10.856658 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:19:10.856668 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:19:10.856676 kernel: ima: No architecture policies found
Sep 9 00:19:10.856683 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 00:19:10.856691 kernel: clk: Disabling unused clocks
Sep 9 00:19:10.856698 kernel: Freeing unused kernel memory: 39424K
Sep 9 00:19:10.856706 kernel: Run /init as init process
Sep 9 00:19:10.856713 kernel: with arguments:
Sep 9 00:19:10.856720 kernel: /init
Sep 9 00:19:10.856727 kernel: with environment:
Sep 9 00:19:10.856736 kernel: HOME=/
Sep 9 00:19:10.856744 kernel: TERM=linux
Sep 9 00:19:10.856751 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:19:10.856760 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 9 00:19:10.856771 systemd[1]: Detected virtualization kvm.
Sep 9 00:19:10.856779 systemd[1]: Detected architecture arm64.
Sep 9 00:19:10.856787 systemd[1]: Running in initrd.
Sep 9 00:19:10.856795 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:19:10.856804 systemd[1]: Hostname set to .
Sep 9 00:19:10.856812 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:19:10.856820 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:19:10.856828 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:19:10.856836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:19:10.856845 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:19:10.856853 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:19:10.856862 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:19:10.856870 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:19:10.856880 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:19:10.856888 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:19:10.856896 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:19:10.856904 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:19:10.856912 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:19:10.856921 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:19:10.856929 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:19:10.856937 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:19:10.856945 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:19:10.856953 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:19:10.856961 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:19:10.856969 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 9 00:19:10.856977 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:19:10.856985 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:19:10.856995 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:19:10.857003 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:19:10.857011 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 00:19:10.857019 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:19:10.857026 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 00:19:10.857037 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:19:10.857045 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:19:10.857053 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:19:10.857062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:19:10.857070 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 00:19:10.857078 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:19:10.857086 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:19:10.857121 systemd-journald[238]: Collecting audit messages is disabled.
Sep 9 00:19:10.857141 systemd-journald[238]: Journal started
Sep 9 00:19:10.857160 systemd-journald[238]: Runtime Journal (/run/log/journal/b852b31419b545d296db41bea88dd0a9) is 5.9M, max 47.3M, 41.4M free.
Sep 9 00:19:10.863908 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:19:10.863933 kernel: Bridge firewalling registered
Sep 9 00:19:10.863943 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:19:10.849069 systemd-modules-load[239]: Inserted module 'overlay'
Sep 9 00:19:10.863000 systemd-modules-load[239]: Inserted module 'br_netfilter'
Sep 9 00:19:10.867922 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:19:10.870621 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:19:10.871921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:19:10.874024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:19:10.878682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:19:10.880418 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:19:10.884106 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:19:10.886798 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:19:10.894365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:19:10.895780 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:19:10.898379 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:19:10.900888 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:19:10.917737 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 00:19:10.920054 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:19:10.928191 dracut-cmdline[277]: dracut-dracut-053
Sep 9 00:19:10.930658 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7395fe4f9fb368b2829f9349e2a89e9a9e96b552675d3b261a5a30cf3c6cb15c
Sep 9 00:19:10.944714 systemd-resolved[279]: Positive Trust Anchors:
Sep 9 00:19:10.944731 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:19:10.944764 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:19:10.949555 systemd-resolved[279]: Defaulting to hostname 'linux'.
Sep 9 00:19:10.950638 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:19:10.955752 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:19:11.001592 kernel: SCSI subsystem initialized
Sep 9 00:19:11.006583 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:19:11.013593 kernel: iscsi: registered transport (tcp)
Sep 9 00:19:11.026594 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:19:11.026610 kernel: QLogic iSCSI HBA Driver
Sep 9 00:19:11.068368 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:19:11.082762 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:19:11.098825 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:19:11.098869 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:19:11.099651 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 9 00:19:11.144593 kernel: raid6: neonx8 gen() 15689 MB/s
Sep 9 00:19:11.161589 kernel: raid6: neonx4 gen() 15625 MB/s
Sep 9 00:19:11.178589 kernel: raid6: neonx2 gen() 13221 MB/s
Sep 9 00:19:11.195578 kernel: raid6: neonx1 gen() 10458 MB/s
Sep 9 00:19:11.212582 kernel: raid6: int64x8 gen() 6927 MB/s
Sep 9 00:19:11.229581 kernel: raid6: int64x4 gen() 7324 MB/s
Sep 9 00:19:11.246590 kernel: raid6: int64x2 gen() 6109 MB/s
Sep 9 00:19:11.263591 kernel: raid6: int64x1 gen() 5034 MB/s
Sep 9 00:19:11.263608 kernel: raid6: using algorithm neonx8 gen() 15689 MB/s
Sep 9 00:19:11.281582 kernel: raid6: .... xor() 12053 MB/s, rmw enabled
Sep 9 00:19:11.281596 kernel: raid6: using neon recovery algorithm
Sep 9 00:19:11.286603 kernel: xor: measuring software checksum speed
Sep 9 00:19:11.286637 kernel: 8regs : 18624 MB/sec
Sep 9 00:19:11.287647 kernel: 32regs : 19236 MB/sec
Sep 9 00:19:11.287661 kernel: arm64_neon : 27114 MB/sec
Sep 9 00:19:11.287670 kernel: xor: using function: arm64_neon (27114 MB/sec)
Sep 9 00:19:11.335589 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 00:19:11.346737 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:19:11.359808 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:19:11.371435 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Sep 9 00:19:11.374601 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:19:11.387739 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 00:19:11.400135 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Sep 9 00:19:11.425872 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:19:11.436767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:19:11.477642 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:19:11.484734 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 00:19:11.497207 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:19:11.498750 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:19:11.500473 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:19:11.502801 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:19:11.514699 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 00:19:11.522582 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 00:19:11.522761 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 00:19:11.523992 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:19:11.529405 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:19:11.529509 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:19:11.533275 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:19:11.534537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:19:11.540645 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 00:19:11.540669 kernel: GPT:9289727 != 19775487
Sep 9 00:19:11.540680 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 00:19:11.540689 kernel: GPT:9289727 != 19775487
Sep 9 00:19:11.540698 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 00:19:11.540708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:19:11.534702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:19:11.537909 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:19:11.550833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:19:11.560578 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (522)
Sep 9 00:19:11.561459 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:19:11.566595 kernel: BTRFS: device fsid 7c1eef97-905d-47ac-bb4a-010204f95541 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (507)
Sep 9 00:19:11.566538 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 00:19:11.573842 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 00:19:11.581136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:19:11.585093 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 00:19:11.586403 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 00:19:11.597698 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 00:19:11.599543 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:19:11.605370 disk-uuid[550]: Primary Header is updated.
Sep 9 00:19:11.605370 disk-uuid[550]: Secondary Entries is updated.
Sep 9 00:19:11.605370 disk-uuid[550]: Secondary Header is updated.
Sep 9 00:19:11.609586 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:19:11.616595 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:19:11.620204 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:19:12.617602 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:19:12.619013 disk-uuid[551]: The operation has completed successfully.
Sep 9 00:19:12.638828 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:19:12.638922 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 00:19:12.659792 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 00:19:12.662772 sh[572]: Success
Sep 9 00:19:12.672606 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 9 00:19:12.699446 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 00:19:12.718082 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 00:19:12.720040 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 00:19:12.730417 kernel: BTRFS info (device dm-0): first mount of filesystem 7c1eef97-905d-47ac-bb4a-010204f95541
Sep 9 00:19:12.730456 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:19:12.730467 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 9 00:19:12.731381 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 00:19:12.732677 kernel: BTRFS info (device dm-0): using free space tree
Sep 9 00:19:12.736055 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 00:19:12.737484 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 00:19:12.752867 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
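The earlier GPT warnings and the disk-uuid.service run fit together: a well-formed GPT keeps its backup header in the disk's last sector, but this image was built for a smaller disk, so the backup header sits at a lower LBA until disk-uuid rewrites it ("Secondary Header is updated"). The arithmetic, using only numbers from this log, is:

```python
# The virtio disk reports 19775488 512-byte sectors, so the backup
# GPT header belongs in the last LBA (sector count minus one).
sectors = 19775488
expected_alt_lba = sectors - 1
print(expected_alt_lba)        # 19775487

# The image's backup header was found at LBA 9289727 instead --
# exactly the mismatch the kernel logged (9289727 != 19775487).
found_alt_lba = 9289727
print(found_alt_lba != expected_alt_lba)   # True

# The capacity line checks out too: 10.1 GB (decimal) vs 9.43 GiB (binary).
size_bytes = sectors * 512
print(round(size_bytes / 1e9, 1), round(size_bytes / 2**30, 2))   # 10.1 9.43
```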
Sep 9 00:19:12.755661 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 00:19:12.762889 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:19:12.762934 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:19:12.762951 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:19:12.766974 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:19:12.774580 kernel: BTRFS info (device vda6): last unmount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:19:12.774606 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 9 00:19:12.781134 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 00:19:12.789772 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 00:19:12.849849 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:19:12.858737 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:19:12.861151 ignition[663]: Ignition 2.19.0
Sep 9 00:19:12.861157 ignition[663]: Stage: fetch-offline
Sep 9 00:19:12.861191 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:19:12.861199 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:19:12.861346 ignition[663]: parsed url from cmdline: ""
Sep 9 00:19:12.861349 ignition[663]: no config URL provided
Sep 9 00:19:12.861354 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 00:19:12.861361 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Sep 9 00:19:12.861383 ignition[663]: op(1): [started] loading QEMU firmware config module
Sep 9 00:19:12.861388 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 00:19:12.868460 ignition[663]: op(1): [finished] loading QEMU firmware config module
Sep 9 00:19:12.884315 systemd-networkd[763]: lo: Link UP
Sep 9 00:19:12.884328 systemd-networkd[763]: lo: Gained carrier
Sep 9 00:19:12.885040 systemd-networkd[763]: Enumeration completed
Sep 9 00:19:12.885592 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:19:12.885701 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:19:12.885705 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:19:12.886480 systemd-networkd[763]: eth0: Link UP
Sep 9 00:19:12.886484 systemd-networkd[763]: eth0: Gained carrier
Sep 9 00:19:12.886491 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:19:12.887255 systemd[1]: Reached target network.target - Network.
Sep 9 00:19:12.904637 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:19:12.923332 ignition[663]: parsing config with SHA512: ae603c1fdd97fc7c4f634e96bd07c90f40ac01b3e5eb73e08d23040d8d44547f53004aaa771b24c109fb1bd4ff9e0a528417c3ba610601574e87eb5729ecfbb1
Sep 9 00:19:12.928699 unknown[663]: fetched base config from "system"
Sep 9 00:19:12.929267 ignition[663]: fetch-offline: fetch-offline passed
Sep 9 00:19:12.928716 unknown[663]: fetched user config from "qemu"
Sep 9 00:19:12.929384 ignition[663]: Ignition finished successfully
Sep 9 00:19:12.932037 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:19:12.933426 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 00:19:12.938740 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 00:19:12.949989 ignition[770]: Ignition 2.19.0
Sep 9 00:19:12.949999 ignition[770]: Stage: kargs
Sep 9 00:19:12.950178 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:19:12.950188 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:19:12.951053 ignition[770]: kargs: kargs passed
Sep 9 00:19:12.954834 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 00:19:12.951097 ignition[770]: Ignition finished successfully
Sep 9 00:19:12.968759 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 00:19:12.977980 ignition[778]: Ignition 2.19.0
Sep 9 00:19:12.977989 ignition[778]: Stage: disks
Sep 9 00:19:12.978148 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:19:12.980604 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 00:19:12.978158 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:19:12.981943 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
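The DHCPv4 lease logged above (address 10.0.0.53/16, gateway 10.0.0.1) can be sanity-checked with Python's standard `ipaddress` module, as a small sketch:

```python
import ipaddress

# The lease from this log: address 10.0.0.53 with a /16 prefix,
# gateway 10.0.0.1 offered by the same server.
iface = ipaddress.ip_interface("10.0.0.53/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                 # 10.0.0.0/16
print(gateway in iface.network)      # True: the gateway is on-link
print(iface.network.num_addresses)   # 65536 addresses in a /16
```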
Sep 9 00:19:12.979029 ignition[778]: disks: disks passed
Sep 9 00:19:12.983458 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 00:19:12.979072 ignition[778]: Ignition finished successfully
Sep 9 00:19:12.985481 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:19:12.987244 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:19:12.988437 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:19:13.005750 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 00:19:13.015352 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 9 00:19:13.020142 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 00:19:13.032697 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 00:19:13.072352 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 00:19:13.073953 kernel: EXT4-fs (vda9): mounted filesystem d987a4c8-1278-4a59-9d40-0c91e08e9423 r/w with ordered data mode. Quota mode: none.
Sep 9 00:19:13.073749 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:19:13.086684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:19:13.088463 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 00:19:13.089981 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 00:19:13.090024 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 00:19:13.096441 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (796)
Sep 9 00:19:13.090047 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:19:13.097261 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 00:19:13.099829 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 00:19:13.104431 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:19:13.104458 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:19:13.104477 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:19:13.104487 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:19:13.105697 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:19:13.136868 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 00:19:13.140531 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Sep 9 00:19:13.144951 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 00:19:13.148511 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 00:19:13.218312 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 00:19:13.229709 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 00:19:13.232187 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 00:19:13.236580 kernel: BTRFS info (device vda6): last unmount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:19:13.250693 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 00:19:13.255236 ignition[911]: INFO : Ignition 2.19.0
Sep 9 00:19:13.255236 ignition[911]: INFO : Stage: mount
Sep 9 00:19:13.257926 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:19:13.257926 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:19:13.257926 ignition[911]: INFO : mount: mount passed
Sep 9 00:19:13.257926 ignition[911]: INFO : Ignition finished successfully
Sep 9 00:19:13.260621 systemd[1]: Finished ignition-mount.service - Ignition (mount).
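The `cut: /sysroot/etc/passwd: No such file or directory` lines above are harmless: the initrd-setup-root script extracts colon-separated fields from the account databases with `cut` while seeding the fresh root, before those files exist. As an illustration only (the sample passwd line below is hypothetical, not from this system), `cut -d: -fN` amounts to:

```python
# What `cut -d: -fN` does to an /etc/passwd-style record: split on the
# delimiter and keep the N-th field (1-based).  Sample line is made up
# purely for illustration.
line = "core:x:500:500::/home/core:/bin/bash"

def cut(record: str, delim: str = ":", field: int = 1) -> str:
    return record.split(delim)[field - 1]

print(cut(line))            # core        (field 1: user name)
print(cut(line, field=7))   # /bin/bash   (field 7: login shell)
```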
Sep 9 00:19:13.271663 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 00:19:13.729296 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 00:19:13.740875 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:19:13.746596 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (925)
Sep 9 00:19:13.750905 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:19:13.750946 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:19:13.750958 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:19:13.753717 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:19:13.754372 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:19:13.771502 ignition[942]: INFO : Ignition 2.19.0
Sep 9 00:19:13.771502 ignition[942]: INFO : Stage: files
Sep 9 00:19:13.773284 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:19:13.773284 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:19:13.773284 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:19:13.776778 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:19:13.776778 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:19:13.779683 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:19:13.779683 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:19:13.779683 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:19:13.779184 unknown[942]: wrote ssh authorized keys file for user: core
Sep 9 00:19:13.784578 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 00:19:13.784578 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 9 00:19:13.845764 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 00:19:14.062446 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 00:19:14.062446 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:19:14.066332 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 00:19:14.293745 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 00:19:14.553129 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:19:14.553129 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 00:19:14.557227 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 9 00:19:14.630759 systemd-networkd[763]: eth0: Gained IPv6LL
Sep 9 00:19:14.848339 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 00:19:15.901520 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 00:19:15.901520 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 00:19:15.906074 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:19:15.906074 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:19:15.906074 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 00:19:15.906074 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 00:19:15.906074 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:19:15.906074 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:19:15.906074 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 00:19:15.906074 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:19:15.924738 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:19:15.928395 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:19:15.930444 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:19:15.930444 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 00:19:15.930444 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 00:19:15.930444 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:19:15.930444 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:19:15.930444 ignition[942]: INFO : files: files passed
Sep 9 00:19:15.930444 ignition[942]: INFO : Ignition finished successfully
Sep 9 00:19:15.931989 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 00:19:15.942716 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 00:19:15.944449 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 00:19:15.949957 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 00:19:15.950067 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 00:19:15.952803 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 00:19:15.954582 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:19:15.954582 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:19:15.957350 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:19:15.957002 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:19:15.960838 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 00:19:15.972718 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 00:19:15.989964 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 00:19:15.990052 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 00:19:15.992199 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 00:19:15.993742 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 00:19:15.995277 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 00:19:15.995970 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 00:19:16.010297 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:19:16.018704 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 00:19:16.026118 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:19:16.027330 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:19:16.029083 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 00:19:16.030773 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 00:19:16.030876 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:19:16.033091 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 00:19:16.035161 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 00:19:16.036559 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 00:19:16.038183 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:19:16.040111 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 00:19:16.041812 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 00:19:16.043435 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:19:16.045358 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 00:19:16.047231 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 00:19:16.048926 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 00:19:16.050333 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 00:19:16.050440 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:19:16.052758 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:19:16.054702 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:19:16.056636 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 00:19:16.057653 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:19:16.059690 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 00:19:16.059793 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:19:16.062296 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 00:19:16.062400 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:19:16.064471 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 00:19:16.066111 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 00:19:16.069622 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:19:16.070705 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 00:19:16.072839 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 00:19:16.074237 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 00:19:16.074316 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:19:16.075832 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:19:16.075911 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:19:16.077450 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 00:19:16.077550 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:19:16.079370 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 00:19:16.079467 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 00:19:16.087804 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 00:19:16.089295 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 00:19:16.090320 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 00:19:16.090452 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:19:16.092353 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 00:19:16.092452 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:19:16.097691 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 00:19:16.098485 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 00:19:16.101030 ignition[997]: INFO : Ignition 2.19.0
Sep 9 00:19:16.101030 ignition[997]: INFO : Stage: umount
Sep 9 00:19:16.101030 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:19:16.101030 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:19:16.105429 ignition[997]: INFO : umount: umount passed
Sep 9 00:19:16.105429 ignition[997]: INFO : Ignition finished successfully
Sep 9 00:19:16.104354 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 00:19:16.104804 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 00:19:16.106476 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 00:19:16.108835 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:19:16.108926 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 00:19:16.111546 systemd[1]: Stopped target network.target - Network.
Sep 9 00:19:16.113914 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:19:16.113981 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 00:19:16.116167 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:19:16.116217 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 00:19:16.117727 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:19:16.117770 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 00:19:16.119601 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 00:19:16.119649 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 00:19:16.121365 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 00:19:16.121409 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 00:19:16.123311 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 00:19:16.124923 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 00:19:16.129122 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:19:16.129242 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 00:19:16.131352 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 00:19:16.131399 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:19:16.132638 systemd-networkd[763]: eth0: DHCPv6 lease lost
Sep 9 00:19:16.134126 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:19:16.134239 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 00:19:16.135938 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:19:16.135968 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:19:16.149652 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 00:19:16.150473 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:19:16.150528 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:19:16.152454 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:19:16.152495 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:19:16.154319 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:19:16.154361 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:19:16.156736 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:19:16.165643 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:19:16.165750 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 00:19:16.183238 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 00:19:16.183371 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:19:16.185521 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 00:19:16.185559 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:19:16.187223 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 00:19:16.187254 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:19:16.188874 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 00:19:16.188918 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:19:16.191489 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 00:19:16.191532 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:19:16.194168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:19:16.194210 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:19:16.206726 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 00:19:16.207729 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 00:19:16.207782 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:19:16.209761 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 00:19:16.209803 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:19:16.211508 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 00:19:16.211548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:19:16.213701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:19:16.213742 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:19:16.215992 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 00:19:16.216067 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 00:19:16.218041 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 00:19:16.220070 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 00:19:16.228555 systemd[1]: Switching root.
Sep 9 00:19:16.255319 systemd-journald[238]: Journal stopped
Sep 9 00:19:16.929160 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Sep 9 00:19:16.929226 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 00:19:16.929239 kernel: SELinux: policy capability open_perms=1
Sep 9 00:19:16.929249 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 00:19:16.929262 kernel: SELinux: policy capability always_check_network=0
Sep 9 00:19:16.929272 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 00:19:16.929282 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 00:19:16.929295 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 00:19:16.929304 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 00:19:16.929314 kernel: audit: type=1403 audit(1757377156.417:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 00:19:16.929325 systemd[1]: Successfully loaded SELinux policy in 32.275ms.
Sep 9 00:19:16.929338 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.336ms.
Sep 9 00:19:16.929349 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 9 00:19:16.929362 systemd[1]: Detected virtualization kvm.
Sep 9 00:19:16.929372 systemd[1]: Detected architecture arm64.
Sep 9 00:19:16.929383 systemd[1]: Detected first boot.
Sep 9 00:19:16.929393 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:19:16.929406 zram_generator::config[1042]: No configuration found.
Sep 9 00:19:16.929417 systemd[1]: Populated /etc with preset unit settings.
Sep 9 00:19:16.929428 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 00:19:16.929438 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 00:19:16.929450 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:19:16.929462 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 00:19:16.929472 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 00:19:16.929482 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 00:19:16.929493 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 00:19:16.929503 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 00:19:16.929514 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 00:19:16.929524 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 00:19:16.929535 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 00:19:16.929547 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:19:16.929558 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:19:16.929598 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 00:19:16.929613 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 00:19:16.929623 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 00:19:16.929634 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:19:16.929645 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 9 00:19:16.929655 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:19:16.929665 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 00:19:16.929679 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 00:19:16.929691 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:19:16.929702 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 00:19:16.929713 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:19:16.929723 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:19:16.929734 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:19:16.929744 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:19:16.929754 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 00:19:16.929766 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 00:19:16.929777 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:19:16.929787 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:19:16.929798 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:19:16.929808 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 00:19:16.929818 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 00:19:16.929829 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 00:19:16.929840 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 00:19:16.929850 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 00:19:16.929863 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 00:19:16.929873 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 00:19:16.929884 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 00:19:16.929894 systemd[1]: Reached target machines.target - Containers.
Sep 9 00:19:16.929905 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 00:19:16.929916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:19:16.929927 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:19:16.929969 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 00:19:16.929984 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:19:16.929994 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:19:16.930005 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:19:16.930015 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 00:19:16.930026 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:19:16.930037 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 00:19:16.930047 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 00:19:16.930058 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 00:19:16.930069 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 00:19:16.930081 kernel: fuse: init (API version 7.39)
Sep 9 00:19:16.930091 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 00:19:16.930101 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:19:16.930111 kernel: loop: module loaded
Sep 9 00:19:16.930121 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:19:16.930131 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 00:19:16.930141 kernel: ACPI: bus type drm_connector registered
Sep 9 00:19:16.930151 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 00:19:16.930163 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:19:16.930196 systemd-journald[1113]: Collecting audit messages is disabled.
Sep 9 00:19:16.930216 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 00:19:16.930227 systemd[1]: Stopped verity-setup.service.
Sep 9 00:19:16.930238 systemd-journald[1113]: Journal started
Sep 9 00:19:16.930258 systemd-journald[1113]: Runtime Journal (/run/log/journal/b852b31419b545d296db41bea88dd0a9) is 5.9M, max 47.3M, 41.4M free.
Sep 9 00:19:16.750100 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 00:19:16.766844 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 00:19:16.767174 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 00:19:16.933748 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:19:16.934169 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 00:19:16.935320 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 00:19:16.936655 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 00:19:16.937830 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 00:19:16.939087 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 00:19:16.940325 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 00:19:16.942610 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 00:19:16.943980 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:19:16.946916 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 00:19:16.947058 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 00:19:16.948485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:19:16.948641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:19:16.950155 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:19:16.951612 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:19:16.952877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:19:16.953004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:19:16.954467 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 00:19:16.954625 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 00:19:16.955992 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:19:16.956114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:19:16.957464 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:19:16.958915 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:19:16.960457 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 00:19:16.972432 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 00:19:16.981662 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 00:19:16.983664 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 00:19:16.984800 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 00:19:16.984842 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:19:16.986752 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 9 00:19:16.988918 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 00:19:16.991000 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 00:19:16.992174 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:19:16.993491 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 00:19:16.995292 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 00:19:16.996612 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:19:16.998764 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 00:19:16.999838 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:19:17.006664 systemd-journald[1113]: Time spent on flushing to /var/log/journal/b852b31419b545d296db41bea88dd0a9 is 34.936ms for 857 entries.
Sep 9 00:19:17.006664 systemd-journald[1113]: System Journal (/var/log/journal/b852b31419b545d296db41bea88dd0a9) is 8.0M, max 195.6M, 187.6M free.
Sep 9 00:19:17.056960 systemd-journald[1113]: Received client request to flush runtime journal.
Sep 9 00:19:17.057013 kernel: loop0: detected capacity change from 0 to 114328
Sep 9 00:19:17.057039 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 00:19:17.003848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:19:17.007999 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 00:19:17.011659 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:19:17.014396 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:19:17.020794 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 00:19:17.022132 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 00:19:17.023744 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 00:19:17.025271 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 00:19:17.029432 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 00:19:17.039882 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Sep 9 00:19:17.039893 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Sep 9 00:19:17.043353 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 9 00:19:17.048615 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 9 00:19:17.052604 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:19:17.054186 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:19:17.059764 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 00:19:17.060611 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 00:19:17.063146 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 9 00:19:17.066235 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 9 00:19:17.074577 kernel: loop1: detected capacity change from 0 to 114432
Sep 9 00:19:17.074787 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 00:19:17.097252 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 00:19:17.106845 kernel: loop2: detected capacity change from 0 to 207008
Sep 9 00:19:17.107741 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:19:17.122793 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Sep 9 00:19:17.122813 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Sep 9 00:19:17.126012 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:19:17.143789 kernel: loop3: detected capacity change from 0 to 114328
Sep 9 00:19:17.148579 kernel: loop4: detected capacity change from 0 to 114432
Sep 9 00:19:17.152698 kernel: loop5: detected capacity change from 0 to 207008
Sep 9 00:19:17.156611 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 00:19:17.157243 (sd-merge)[1180]: Merged extensions into '/usr'.
Sep 9 00:19:17.161138 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 00:19:17.161151 systemd[1]: Reloading...
Sep 9 00:19:17.205606 zram_generator::config[1205]: No configuration found.
Sep 9 00:19:17.260349 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 00:19:17.305855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:19:17.340991 systemd[1]: Reloading finished in 179 ms.
Sep 9 00:19:17.374488 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 00:19:17.377601 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 00:19:17.389723 systemd[1]: Starting ensure-sysext.service...
Sep 9 00:19:17.391786 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:19:17.397002 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)...
Sep 9 00:19:17.397015 systemd[1]: Reloading...
Sep 9 00:19:17.407801 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 00:19:17.408060 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 00:19:17.408710 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 00:19:17.408925 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Sep 9 00:19:17.408976 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Sep 9 00:19:17.411173 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:19:17.411186 systemd-tmpfiles[1244]: Skipping /boot
Sep 9 00:19:17.417735 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:19:17.417749 systemd-tmpfiles[1244]: Skipping /boot
Sep 9 00:19:17.446620 zram_generator::config[1271]: No configuration found.
Sep 9 00:19:17.528926 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:19:17.564002 systemd[1]: Reloading finished in 166 ms.
Sep 9 00:19:17.580301 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 00:19:17.589931 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:19:17.597414 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 9 00:19:17.599849 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 00:19:17.602022 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 00:19:17.605885 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:19:17.609973 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:19:17.616158 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 00:19:17.619969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:19:17.626826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:19:17.630848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:19:17.633851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:19:17.635825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:19:17.638153 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 00:19:17.641603 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 00:19:17.641796 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Sep 9 00:19:17.643366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:19:17.643490 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:19:17.645276 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 00:19:17.647025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:19:17.647145 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:19:17.649132 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:19:17.649250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:19:17.651163 augenrules[1332]: No rules
Sep 9 00:19:17.653339 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 9 00:19:17.659223 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:19:17.664630 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 00:19:17.674210 systemd[1]: Finished ensure-sysext.service.
Sep 9 00:19:17.675272 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 00:19:17.677496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:19:17.686057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:19:17.689759 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:19:17.693847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:19:17.695857 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:19:17.697821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:19:17.700516 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:19:17.703112 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 00:19:17.706781 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 00:19:17.707759 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:19:17.708252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:19:17.708396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:19:17.709711 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:19:17.709834 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:19:17.720186 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 9 00:19:17.722634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:19:17.722794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:19:17.724542 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:19:17.729529 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:19:17.729727 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:19:17.730961 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:19:17.737186 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 00:19:17.757405 systemd-resolved[1311]: Positive Trust Anchors:
Sep 9 00:19:17.757424 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:19:17.757456 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:19:17.761751 systemd-networkd[1373]: lo: Link UP
Sep 9 00:19:17.761758 systemd-networkd[1373]: lo: Gained carrier
Sep 9 00:19:17.762419 systemd-networkd[1373]: Enumeration completed
Sep 9 00:19:17.762552 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:19:17.763251 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:19:17.763259 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:19:17.763860 systemd-networkd[1373]: eth0: Link UP
Sep 9 00:19:17.763867 systemd-networkd[1373]: eth0: Gained carrier
Sep 9 00:19:17.763881 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:19:17.766249 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Sep 9 00:19:17.771745 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 00:19:17.773334 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:19:17.774976 systemd[1]: Reached target network.target - Network.
Sep 9 00:19:17.775627 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:19:17.775956 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:19:17.780604 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1362)
Sep 9 00:19:17.787093 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 00:19:17.788037 systemd-timesyncd[1374]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 00:19:17.788089 systemd-timesyncd[1374]: Initial clock synchronization to Tue 2025-09-09 00:19:17.589702 UTC.
Sep 9 00:19:17.794188 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 00:19:17.822051 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:19:17.833884 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 00:19:17.836447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:19:17.843260 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 9 00:19:17.844838 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 00:19:17.856812 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 9 00:19:17.866590 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 00:19:17.873117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:19:17.905654 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 9 00:19:17.907007 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:19:17.908112 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:19:17.909224 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 00:19:17.910445 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 00:19:17.911875 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 00:19:17.913029 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 00:19:17.914239 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 00:19:17.915432 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 00:19:17.915467 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:19:17.916381 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:19:17.917939 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 00:19:17.920226 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 00:19:17.927472 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 00:19:17.929573 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 9 00:19:17.931022 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 00:19:17.932169 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:19:17.933119 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:19:17.934050 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:19:17.934080 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:19:17.934897 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 00:19:17.936794 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 00:19:17.937245 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:19:17.939708 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:19:17.941792 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:19:17.942768 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:19:17.945751 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:19:17.949708 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:19:17.950387 jq[1409]: false Sep 9 00:19:17.953742 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:19:17.955691 dbus-daemon[1408]: [system] SELinux support is enabled Sep 9 00:19:17.959742 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:19:17.960099 extend-filesystems[1410]: Found loop3 Sep 9 00:19:17.961946 extend-filesystems[1410]: Found loop4 Sep 9 00:19:17.961946 extend-filesystems[1410]: Found loop5 Sep 9 00:19:17.961946 extend-filesystems[1410]: Found vda Sep 9 00:19:17.961946 extend-filesystems[1410]: Found vda1 Sep 9 00:19:17.961946 extend-filesystems[1410]: Found vda2 Sep 9 00:19:17.961946 extend-filesystems[1410]: Found vda3 Sep 9 00:19:17.961946 extend-filesystems[1410]: Found usr Sep 9 00:19:17.961946 extend-filesystems[1410]: Found vda4 Sep 9 00:19:17.961946 extend-filesystems[1410]: Found vda6 Sep 9 00:19:17.961946 extend-filesystems[1410]: Found vda7 Sep 9 00:19:17.961946 extend-filesystems[1410]: Found vda9 Sep 9 00:19:17.961946 extend-filesystems[1410]: Checking size of /dev/vda9 Sep 9 00:19:17.965982 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 9 00:19:17.968479 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:19:17.968875 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:19:17.970085 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:19:17.974001 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:19:17.977141 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:19:17.977764 extend-filesystems[1410]: Resized partition /dev/vda9 Sep 9 00:19:17.980513 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 00:19:17.984117 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1354) Sep 9 00:19:17.984180 extend-filesystems[1430]: resize2fs 1.47.1 (20-May-2024) Sep 9 00:19:17.985783 jq[1429]: true Sep 9 00:19:17.987583 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:19:17.988272 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:19:17.988427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:19:17.988700 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:19:17.988837 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:19:17.992724 update_engine[1427]: I20250909 00:19:17.992499 1427 main.cc:92] Flatcar Update Engine starting Sep 9 00:19:17.993444 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:19:17.993611 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 9 00:19:17.994430 update_engine[1427]: I20250909 00:19:17.994328 1427 update_check_scheduler.cc:74] Next update check in 6m40s Sep 9 00:19:18.009834 jq[1434]: true Sep 9 00:19:18.010384 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:19:18.011723 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:19:18.017153 tar[1433]: linux-arm64/LICENSE Sep 9 00:19:18.018877 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:19:18.021787 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:19:18.021824 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:19:18.022949 tar[1433]: linux-arm64/helm Sep 9 00:19:18.023058 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:19:18.023082 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:19:18.025110 extend-filesystems[1430]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:19:18.025110 extend-filesystems[1430]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:19:18.025110 extend-filesystems[1430]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:19:18.033401 extend-filesystems[1410]: Resized filesystem in /dev/vda9 Sep 9 00:19:18.027729 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:19:18.028943 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:19:18.029084 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 9 00:19:18.054520 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 00:19:18.054781 systemd-logind[1422]: New seat seat0.
Sep 9 00:19:18.055515 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 00:19:18.062379 bash[1464]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 00:19:18.063737 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 00:19:18.068214 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 00:19:18.089857 locksmithd[1450]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 00:19:18.154443 containerd[1444]: time="2025-09-09T00:19:18.154366470Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 9 00:19:18.182634 containerd[1444]: time="2025-09-09T00:19:18.182598159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184598 containerd[1444]: time="2025-09-09T00:19:18.183948757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184598 containerd[1444]: time="2025-09-09T00:19:18.183986283Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 9 00:19:18.184598 containerd[1444]: time="2025-09-09T00:19:18.184000677Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 9 00:19:18.184598 containerd[1444]: time="2025-09-09T00:19:18.184152187Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 9 00:19:18.184598 containerd[1444]: time="2025-09-09T00:19:18.184168142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184598 containerd[1444]: time="2025-09-09T00:19:18.184216512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184598 containerd[1444]: time="2025-09-09T00:19:18.184233403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184598 containerd[1444]: time="2025-09-09T00:19:18.184370948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184598 containerd[1444]: time="2025-09-09T00:19:18.184384133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184598 containerd[1444]: time="2025-09-09T00:19:18.184395211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184598 containerd[1444]: time="2025-09-09T00:19:18.184403871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184832 containerd[1444]: time="2025-09-09T00:19:18.184467690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184832 containerd[1444]: time="2025-09-09T00:19:18.184675060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184832 containerd[1444]: time="2025-09-09T00:19:18.184766301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:19:18.184832 containerd[1444]: time="2025-09-09T00:19:18.184778901Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 9 00:19:18.184900 containerd[1444]: time="2025-09-09T00:19:18.184848141Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 9 00:19:18.184900 containerd[1444]: time="2025-09-09T00:19:18.184882937Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:19:18.188217 containerd[1444]: time="2025-09-09T00:19:18.188189900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 9 00:19:18.188264 containerd[1444]: time="2025-09-09T00:19:18.188229962Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 9 00:19:18.188264 containerd[1444]: time="2025-09-09T00:19:18.188244551Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 9 00:19:18.188264 containerd[1444]: time="2025-09-09T00:19:18.188257619Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 9 00:19:18.188340 containerd[1444]: time="2025-09-09T00:19:18.188269165Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 9 00:19:18.188404 containerd[1444]: time="2025-09-09T00:19:18.188384943Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 9 00:19:18.189115 containerd[1444]: time="2025-09-09T00:19:18.188849537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 9 00:19:18.189115 containerd[1444]: time="2025-09-09T00:19:18.189004011Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 9 00:19:18.189115 containerd[1444]: time="2025-09-09T00:19:18.189028119Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 9 00:19:18.189115 containerd[1444]: time="2025-09-09T00:19:18.189047233Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 9 00:19:18.189115 containerd[1444]: time="2025-09-09T00:19:18.189067049Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 9 00:19:18.189489 containerd[1444]: time="2025-09-09T00:19:18.189089674Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 9 00:19:18.189489 containerd[1444]: time="2025-09-09T00:19:18.189308864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 9 00:19:18.189489 containerd[1444]: time="2025-09-09T00:19:18.189335585Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 9 00:19:18.189489 containerd[1444]: time="2025-09-09T00:19:18.189356416Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 9 00:19:18.189489 containerd[1444]: time="2025-09-09T00:19:18.189375062Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 9 00:19:18.189489 containerd[1444]: time="2025-09-09T00:19:18.189392499Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 9 00:19:18.189489 containerd[1444]: time="2025-09-09T00:19:18.189407946Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 9 00:19:18.189489 containerd[1444]: time="2025-09-09T00:19:18.189434706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189489 containerd[1444]: time="2025-09-09T00:19:18.189450466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189489 containerd[1444]: time="2025-09-09T00:19:18.189497393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189721 containerd[1444]: time="2025-09-09T00:19:18.189517522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189721 containerd[1444]: time="2025-09-09T00:19:18.189534764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189721 containerd[1444]: time="2025-09-09T00:19:18.189552356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189721 containerd[1444]: time="2025-09-09T00:19:18.189590780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189721 containerd[1444]: time="2025-09-09T00:19:18.189608100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189721 containerd[1444]: time="2025-09-09T00:19:18.189624601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189721 containerd[1444]: time="2025-09-09T00:19:18.189643286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189721 containerd[1444]: time="2025-09-09T00:19:18.189656237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189721 containerd[1444]: time="2025-09-09T00:19:18.189671957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189721 containerd[1444]: time="2025-09-09T00:19:18.189687054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189721 containerd[1444]: time="2025-09-09T00:19:18.189706207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 9 00:19:18.189915 containerd[1444]: time="2025-09-09T00:19:18.189732304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189915 containerd[1444]: time="2025-09-09T00:19:18.189748414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.189915 containerd[1444]: time="2025-09-09T00:19:18.189762769Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 9 00:19:18.189915 containerd[1444]: time="2025-09-09T00:19:18.189891810Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 9 00:19:18.189983 containerd[1444]: time="2025-09-09T00:19:18.189914903Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 9 00:19:18.190131 containerd[1444]: time="2025-09-09T00:19:18.189926723Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 9 00:19:18.190416 containerd[1444]: time="2025-09-09T00:19:18.190198067Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 9 00:19:18.190416 containerd[1444]: time="2025-09-09T00:19:18.190218040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.190416 containerd[1444]: time="2025-09-09T00:19:18.190243318Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 9 00:19:18.190416 containerd[1444]: time="2025-09-09T00:19:18.190255098Z" level=info msg="NRI interface is disabled by configuration."
Sep 9 00:19:18.190416 containerd[1444]: time="2025-09-09T00:19:18.190266450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 9 00:19:18.190662 containerd[1444]: time="2025-09-09T00:19:18.190604148Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 9 00:19:18.190662 containerd[1444]: time="2025-09-09T00:19:18.190662427Z" level=info msg="Connect containerd service"
Sep 9 00:19:18.190809 containerd[1444]: time="2025-09-09T00:19:18.190688095Z" level=info msg="using legacy CRI server"
Sep 9 00:19:18.190809 containerd[1444]: time="2025-09-09T00:19:18.190695584Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 00:19:18.190845 containerd[1444]: time="2025-09-09T00:19:18.190810153Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 9 00:19:18.191622 containerd[1444]: time="2025-09-09T00:19:18.191594774Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:19:18.191808 containerd[1444]: time="2025-09-09T00:19:18.191779714Z" level=info msg="Start subscribing containerd event"
Sep 9 00:19:18.191850 containerd[1444]: time="2025-09-09T00:19:18.191821454Z" level=info msg="Start recovering state"
Sep 9 00:19:18.191894 containerd[1444]: time="2025-09-09T00:19:18.191875949Z" level=info msg="Start event monitor"
Sep 9 00:19:18.191894 containerd[1444]: time="2025-09-09T00:19:18.191890304Z" level=info msg="Start snapshots syncer"
Sep 9 00:19:18.191935 containerd[1444]: time="2025-09-09T00:19:18.191899003Z" level=info msg="Start cni network conf syncer for default"
Sep 9 00:19:18.191935 containerd[1444]: time="2025-09-09T00:19:18.191905634Z" level=info msg="Start streaming server"
Sep 9 00:19:18.192387 containerd[1444]: time="2025-09-09T00:19:18.192367224Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 00:19:18.192426 containerd[1444]: time="2025-09-09T00:19:18.192413762Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 00:19:18.194134 containerd[1444]: time="2025-09-09T00:19:18.192460260Z" level=info msg="containerd successfully booted in 0.038855s"
Sep 9 00:19:18.192531 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 00:19:18.333588 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 00:19:18.353102 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 00:19:18.368779 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 00:19:18.373714 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 00:19:18.373905 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 00:19:18.376865 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 00:19:18.381659 tar[1433]: linux-arm64/README.md
Sep 9 00:19:18.387598 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 00:19:18.390622 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 00:19:18.393480 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 00:19:18.395432 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 00:19:18.396768 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 00:19:18.982716 systemd-networkd[1373]: eth0: Gained IPv6LL Sep 9 00:19:18.984786 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:19:18.986442 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:19:18.996806 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:19:18.998737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:19:19.000520 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:19:19.013663 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:19:19.013813 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:19:19.015527 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:19:19.016247 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:19:19.539693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:19.541197 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:19:19.543111 (kubelet)[1520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:19:19.547491 systemd[1]: Startup finished in 528ms (kernel) + 5.737s (initrd) + 3.162s (userspace) = 9.428s. 
Sep 9 00:19:19.870939 kubelet[1520]: E0909 00:19:19.870832 1520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:19:19.873602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:19:19.873737 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:19:23.664009 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:19:23.665053 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:39968.service - OpenSSH per-connection server daemon (10.0.0.1:39968). Sep 9 00:19:23.707546 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 39968 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:19:23.709219 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:23.716081 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:19:23.728785 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:19:23.731045 systemd-logind[1422]: New session 1 of user core. Sep 9 00:19:23.738310 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:19:23.740389 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:19:23.746307 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:19:23.816774 systemd[1538]: Queued start job for default target default.target. Sep 9 00:19:23.825445 systemd[1538]: Created slice app.slice - User Application Slice. Sep 9 00:19:23.825475 systemd[1538]: Reached target paths.target - Paths. Sep 9 00:19:23.825488 systemd[1538]: Reached target timers.target - Timers. 
Sep 9 00:19:23.826725 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:19:23.835914 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:19:23.835974 systemd[1538]: Reached target sockets.target - Sockets. Sep 9 00:19:23.835985 systemd[1538]: Reached target basic.target - Basic System. Sep 9 00:19:23.836022 systemd[1538]: Reached target default.target - Main User Target. Sep 9 00:19:23.836049 systemd[1538]: Startup finished in 85ms. Sep 9 00:19:23.836320 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:19:23.837604 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:19:23.912867 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:39984.service - OpenSSH per-connection server daemon (10.0.0.1:39984). Sep 9 00:19:23.942370 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 39984 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:19:23.943598 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:23.946976 systemd-logind[1422]: New session 2 of user core. Sep 9 00:19:23.956703 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:19:24.008168 sshd[1549]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:24.022880 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:39984.service: Deactivated successfully. Sep 9 00:19:24.025746 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:19:24.026931 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:19:24.028019 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:39990.service - OpenSSH per-connection server daemon (10.0.0.1:39990). Sep 9 00:19:24.029033 systemd-logind[1422]: Removed session 2. 
Sep 9 00:19:24.060221 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 39990 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:19:24.061482 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:24.064832 systemd-logind[1422]: New session 3 of user core. Sep 9 00:19:24.082697 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:19:24.129368 sshd[1556]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:24.141866 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:39990.service: Deactivated successfully. Sep 9 00:19:24.145005 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:19:24.146205 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:19:24.161854 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:39998.service - OpenSSH per-connection server daemon (10.0.0.1:39998). Sep 9 00:19:24.162675 systemd-logind[1422]: Removed session 3. Sep 9 00:19:24.190833 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 39998 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:19:24.192137 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:24.196456 systemd-logind[1422]: New session 4 of user core. Sep 9 00:19:24.210812 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:19:24.262727 sshd[1563]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:24.281950 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:39998.service: Deactivated successfully. Sep 9 00:19:24.283911 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:19:24.285440 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:19:24.298855 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:40010.service - OpenSSH per-connection server daemon (10.0.0.1:40010). Sep 9 00:19:24.299719 systemd-logind[1422]: Removed session 4. 
Sep 9 00:19:24.328054 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 40010 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:19:24.329405 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:24.332668 systemd-logind[1422]: New session 5 of user core. Sep 9 00:19:24.341702 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:19:24.396281 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:19:24.396586 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:19:24.414342 sudo[1573]: pam_unix(sudo:session): session closed for user root Sep 9 00:19:24.416828 sshd[1570]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:24.429003 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:40010.service: Deactivated successfully. Sep 9 00:19:24.430665 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:19:24.431974 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:19:24.434588 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:40024.service - OpenSSH per-connection server daemon (10.0.0.1:40024). Sep 9 00:19:24.435283 systemd-logind[1422]: Removed session 5. Sep 9 00:19:24.468454 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 40024 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:19:24.470000 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:24.474177 systemd-logind[1422]: New session 6 of user core. Sep 9 00:19:24.480688 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 9 00:19:24.532672 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 00:19:24.532938 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:19:24.535793 sudo[1582]: pam_unix(sudo:session): session closed for user root
Sep 9 00:19:24.540284 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 9 00:19:24.540540 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:19:24.554085 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 9 00:19:24.555118 auditctl[1585]: No rules
Sep 9 00:19:24.555606 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:19:24.555827 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 9 00:19:24.559848 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 9 00:19:24.580567 augenrules[1603]: No rules
Sep 9 00:19:24.581763 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 9 00:19:24.582739 sudo[1581]: pam_unix(sudo:session): session closed for user root
Sep 9 00:19:24.584303 sshd[1578]: pam_unix(sshd:session): session closed for user core
Sep 9 00:19:24.590780 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:40024.service: Deactivated successfully.
Sep 9 00:19:24.592046 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 00:19:24.593266 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit.
Sep 9 00:19:24.594307 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:40038.service - OpenSSH per-connection server daemon (10.0.0.1:40038).
Sep 9 00:19:24.595346 systemd-logind[1422]: Removed session 6.
Sep 9 00:19:24.627294 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 40038 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:19:24.628671 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:19:24.633689 systemd-logind[1422]: New session 7 of user core.
Sep 9 00:19:24.643700 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 00:19:24.694064 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 00:19:24.694338 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:19:24.953931 (dockerd)[1633]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 00:19:24.954243 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 00:19:25.181286 dockerd[1633]: time="2025-09-09T00:19:25.179892525Z" level=info msg="Starting up"
Sep 9 00:19:25.335655 dockerd[1633]: time="2025-09-09T00:19:25.335542732Z" level=info msg="Loading containers: start."
Sep 9 00:19:25.412585 kernel: Initializing XFRM netlink socket
Sep 9 00:19:25.483139 systemd-networkd[1373]: docker0: Link UP
Sep 9 00:19:25.506807 dockerd[1633]: time="2025-09-09T00:19:25.506762052Z" level=info msg="Loading containers: done."
Sep 9 00:19:25.519184 dockerd[1633]: time="2025-09-09T00:19:25.519135054Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 00:19:25.519307 dockerd[1633]: time="2025-09-09T00:19:25.519230912Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 9 00:19:25.519349 dockerd[1633]: time="2025-09-09T00:19:25.519330533Z" level=info msg="Daemon has completed initialization"
Sep 9 00:19:25.550295 dockerd[1633]: time="2025-09-09T00:19:25.550087466Z" level=info msg="API listen on /run/docker.sock"
Sep 9 00:19:25.550333 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 00:19:26.108326 containerd[1444]: time="2025-09-09T00:19:26.108286379Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 9 00:19:26.789452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522006292.mount: Deactivated successfully.
Sep 9 00:19:28.142718 containerd[1444]: time="2025-09-09T00:19:28.142063948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:28.143196 containerd[1444]: time="2025-09-09T00:19:28.143172315Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359"
Sep 9 00:19:28.144487 containerd[1444]: time="2025-09-09T00:19:28.144439598Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:28.147263 containerd[1444]: time="2025-09-09T00:19:28.147227741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:28.148312 containerd[1444]: time="2025-09-09T00:19:28.148285520Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 2.039954344s"
Sep 9 00:19:28.148344 containerd[1444]: time="2025-09-09T00:19:28.148320967Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 9 00:19:28.148991 containerd[1444]: time="2025-09-09T00:19:28.148969511Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 9 00:19:29.352911 containerd[1444]: time="2025-09-09T00:19:29.352868392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:29.353925 containerd[1444]: time="2025-09-09T00:19:29.353895698Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554"
Sep 9 00:19:29.354818 containerd[1444]: time="2025-09-09T00:19:29.354409113Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:29.357106 containerd[1444]: time="2025-09-09T00:19:29.357075137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:29.358255 containerd[1444]: time="2025-09-09T00:19:29.358227525Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.209224864s"
Sep 9 00:19:29.358312 containerd[1444]: time="2025-09-09T00:19:29.358262842Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 9 00:19:29.358679 containerd[1444]: time="2025-09-09T00:19:29.358657736Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 9 00:19:30.125667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:19:30.140736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:19:30.242682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:30.246865 (kubelet)[1850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:19:30.295780 kubelet[1850]: E0909 00:19:30.295615 1850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:19:30.299384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:19:30.299529 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:19:30.546487 containerd[1444]: time="2025-09-09T00:19:30.546332004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:30.547832 containerd[1444]: time="2025-09-09T00:19:30.547613416Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529"
Sep 9 00:19:30.548658 containerd[1444]: time="2025-09-09T00:19:30.548627966Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:30.552118 containerd[1444]: time="2025-09-09T00:19:30.552090157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:30.553635 containerd[1444]: time="2025-09-09T00:19:30.553605396Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.194849695s"
Sep 9 00:19:30.553680 containerd[1444]: time="2025-09-09T00:19:30.553638589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 9 00:19:30.554271 containerd[1444]: time="2025-09-09T00:19:30.554232490Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 9 00:19:31.574176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628614155.mount: Deactivated successfully.
Sep 9 00:19:31.778172 containerd[1444]: time="2025-09-09T00:19:31.778127907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:31.778860 containerd[1444]: time="2025-09-09T00:19:31.778550732Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726"
Sep 9 00:19:31.779388 containerd[1444]: time="2025-09-09T00:19:31.779336764Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:31.781537 containerd[1444]: time="2025-09-09T00:19:31.781489759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:31.782191 containerd[1444]: time="2025-09-09T00:19:31.782155638Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.227887401s"
Sep 9 00:19:31.782255 containerd[1444]: time="2025-09-09T00:19:31.782202233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 9 00:19:31.782896 containerd[1444]: time="2025-09-09T00:19:31.782871218Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 00:19:32.347805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796516972.mount: Deactivated successfully.
Sep 9 00:19:33.140844 containerd[1444]: time="2025-09-09T00:19:33.140773683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:33.141431 containerd[1444]: time="2025-09-09T00:19:33.141392328Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 9 00:19:33.142274 containerd[1444]: time="2025-09-09T00:19:33.142239844Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:33.148171 containerd[1444]: time="2025-09-09T00:19:33.146318998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:33.148171 containerd[1444]: time="2025-09-09T00:19:33.147557602Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.364651646s"
Sep 9 00:19:33.148171 containerd[1444]: time="2025-09-09T00:19:33.147604285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 00:19:33.148515 containerd[1444]: time="2025-09-09T00:19:33.148493023Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 00:19:33.588833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount490865663.mount: Deactivated successfully.
Sep 9 00:19:33.594740 containerd[1444]: time="2025-09-09T00:19:33.593967001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:33.595689 containerd[1444]: time="2025-09-09T00:19:33.595656612Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 9 00:19:33.596733 containerd[1444]: time="2025-09-09T00:19:33.596696842Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:33.598703 containerd[1444]: time="2025-09-09T00:19:33.598669304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:33.599436 containerd[1444]: time="2025-09-09T00:19:33.599405035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 450.878803ms"
Sep 9 00:19:33.599501 containerd[1444]: time="2025-09-09T00:19:33.599439798Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 00:19:33.600329 containerd[1444]: time="2025-09-09T00:19:33.600307447Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 9 00:19:34.099131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604998442.mount: Deactivated successfully.
Sep 9 00:19:35.720650 containerd[1444]: time="2025-09-09T00:19:35.720589902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:35.721129 containerd[1444]: time="2025-09-09T00:19:35.721079844Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 9 00:19:35.722066 containerd[1444]: time="2025-09-09T00:19:35.722031561Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:35.725610 containerd[1444]: time="2025-09-09T00:19:35.725584921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:35.726899 containerd[1444]: time="2025-09-09T00:19:35.726864556Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.126449982s"
Sep 9 00:19:35.726935 containerd[1444]: time="2025-09-09T00:19:35.726899786Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 9 00:19:39.953512 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:39.967813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:19:39.990286 systemd[1]: Reloading requested from client PID 2008 ('systemctl') (unit session-7.scope)...
Sep 9 00:19:39.990301 systemd[1]: Reloading...
Sep 9 00:19:40.062800 zram_generator::config[2047]: No configuration found.
Sep 9 00:19:40.144363 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:19:40.197483 systemd[1]: Reloading finished in 206 ms.
Sep 9 00:19:40.243679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:40.246014 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:19:40.247333 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 00:19:40.247667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:40.249061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:19:40.343280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:40.347266 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:19:40.376358 kubelet[2094]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:19:40.376358 kubelet[2094]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:19:40.376358 kubelet[2094]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:19:40.376658 kubelet[2094]: I0909 00:19:40.376409 2094 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:19:41.196390 kubelet[2094]: I0909 00:19:41.196343 2094 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 00:19:41.196390 kubelet[2094]: I0909 00:19:41.196375 2094 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:19:41.196656 kubelet[2094]: I0909 00:19:41.196632 2094 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 00:19:41.213945 kubelet[2094]: E0909 00:19:41.213906 2094 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:19:41.216324 kubelet[2094]: I0909 00:19:41.216044 2094 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:19:41.221005 kubelet[2094]: E0909 00:19:41.220979 2094 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 00:19:41.221005 kubelet[2094]: I0909 00:19:41.221004 2094 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 00:19:41.223901 kubelet[2094]: I0909 00:19:41.223871 2094 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:19:41.225009 kubelet[2094]: I0909 00:19:41.224966 2094 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:19:41.225152 kubelet[2094]: I0909 00:19:41.225002 2094 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:19:41.225233 kubelet[2094]: I0909 00:19:41.225224 2094 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:19:41.225233 kubelet[2094]: I0909 00:19:41.225234 2094 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 00:19:41.225417 kubelet[2094]: I0909 00:19:41.225403 2094 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:19:41.227680 kubelet[2094]: I0909 00:19:41.227658 2094 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 00:19:41.227718 kubelet[2094]: I0909 00:19:41.227683 2094 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:19:41.227718 kubelet[2094]: I0909 00:19:41.227699 2094 kubelet.go:352] "Adding apiserver pod source"
Sep 9 00:19:41.227718 kubelet[2094]: I0909 00:19:41.227708 2094 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:19:41.230033 kubelet[2094]: W0909 00:19:41.229972 2094 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused
Sep 9 00:19:41.230070 kubelet[2094]: E0909 00:19:41.230032 2094 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:19:41.231305 kubelet[2094]: I0909 00:19:41.231259 2094 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 9 00:19:41.231408 kubelet[2094]: W0909 00:19:41.231334 2094 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused
Sep 9 00:19:41.231408 kubelet[2094]: E0909 00:19:41.231388 2094 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:19:41.232459 kubelet[2094]: I0909 00:19:41.232013 2094 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 00:19:41.232459 kubelet[2094]: W0909 00:19:41.232124 2094 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 00:19:41.234604 kubelet[2094]: I0909 00:19:41.233128 2094 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 00:19:41.234604 kubelet[2094]: I0909 00:19:41.233164 2094 server.go:1287] "Started kubelet"
Sep 9 00:19:41.235715 kubelet[2094]: I0909 00:19:41.235655 2094 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:19:41.235907 kubelet[2094]: I0909 00:19:41.235888 2094 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:19:41.235983 kubelet[2094]: E0909 00:19:41.235687 2094 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863753c3fc75c8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:19:41.233142927 +0000 UTC m=+0.883061334,LastTimestamp:2025-09-09 00:19:41.233142927 +0000 UTC m=+0.883061334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 00:19:41.236039 kubelet[2094]: I0909 00:19:41.235998 2094 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:19:41.236697 kubelet[2094]: I0909 00:19:41.236670 2094 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:19:41.236813 kubelet[2094]: I0909 00:19:41.236787 2094 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 00:19:41.236922 kubelet[2094]: I0909 00:19:41.236906 2094 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 00:19:41.237047 kubelet[2094]: I0909 00:19:41.237033 2094 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:19:41.237255 kubelet[2094]: E0909 00:19:41.237229 2094 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:19:41.237613 kubelet[2094]: W0909 00:19:41.237559 2094 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused
Sep 9 00:19:41.237646 kubelet[2094]: E0909 00:19:41.237621 2094 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:19:41.237691 kubelet[2094]: I0909 00:19:41.237677 2094 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:19:41.237720 kubelet[2094]: I0909 00:19:41.237692 2094 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 00:19:41.238571 kubelet[2094]: E0909 00:19:41.238361 2094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="200ms"
Sep 9 00:19:41.238820 kubelet[2094]: I0909 00:19:41.238794 2094 factory.go:221] Registration of the systemd container factory successfully
Sep 9 00:19:41.238959 kubelet[2094]: I0909 00:19:41.238920 2094 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:19:41.239101 kubelet[2094]: E0909 00:19:41.239080 2094 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:19:41.240316 kubelet[2094]: I0909 00:19:41.240295 2094 factory.go:221] Registration of the containerd container factory successfully
Sep 9 00:19:41.249902 kubelet[2094]: I0909 00:19:41.249877 2094 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 00:19:41.249902 kubelet[2094]: I0909 00:19:41.249894 2094 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 00:19:41.249902 kubelet[2094]: I0909 00:19:41.249906 2094 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:19:41.337794 kubelet[2094]: E0909 00:19:41.337747 2094 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:19:41.354882 kubelet[2094]: I0909 00:19:41.354822 2094 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:19:41.355966 kubelet[2094]: I0909 00:19:41.355594 2094 policy_none.go:49] "None policy: Start"
Sep 9 00:19:41.355966 kubelet[2094]: I0909 00:19:41.355624 2094 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 00:19:41.355966 kubelet[2094]: I0909 00:19:41.355636 2094 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:19:41.355966 kubelet[2094]: I0909 00:19:41.355888 2094 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:19:41.355966 kubelet[2094]: I0909 00:19:41.355914 2094 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 00:19:41.355966 kubelet[2094]: I0909 00:19:41.355936 2094 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 00:19:41.355966 kubelet[2094]: I0909 00:19:41.355943 2094 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 00:19:41.356168 kubelet[2094]: E0909 00:19:41.355982 2094 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:19:41.357249 kubelet[2094]: W0909 00:19:41.357199 2094 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused
Sep 9 00:19:41.357364 kubelet[2094]: E0909 00:19:41.357343 2094 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:19:41.361932 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 00:19:41.374918 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 00:19:41.377754 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 00:19:41.386266 kubelet[2094]: I0909 00:19:41.386240 2094 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 00:19:41.386474 kubelet[2094]: I0909 00:19:41.386401 2094 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:19:41.386474 kubelet[2094]: I0909 00:19:41.386414 2094 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:19:41.386971 kubelet[2094]: I0909 00:19:41.386804 2094 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:19:41.387264 kubelet[2094]: E0909 00:19:41.387245 2094 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 00:19:41.387308 kubelet[2094]: E0909 00:19:41.387282 2094 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 9 00:19:41.439150 kubelet[2094]: E0909 00:19:41.439103 2094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="400ms"
Sep 9 00:19:41.462401 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice.
Sep 9 00:19:41.473260 kubelet[2094]: E0909 00:19:41.473142 2094 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:19:41.475372 systemd[1]: Created slice kubepods-burstable-pod5446e6f9db452a90d6c93b05b5d7159e.slice - libcontainer container kubepods-burstable-pod5446e6f9db452a90d6c93b05b5d7159e.slice.
Sep 9 00:19:41.485593 kubelet[2094]: E0909 00:19:41.485442 2094 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:19:41.487733 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice.
Sep 9 00:19:41.487963 kubelet[2094]: I0909 00:19:41.487786 2094 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:19:41.488198 kubelet[2094]: E0909 00:19:41.488177 2094 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost"
Sep 9 00:19:41.489175 kubelet[2094]: E0909 00:19:41.489136 2094 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:19:41.638445 kubelet[2094]: I0909 00:19:41.638404 2094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:41.638445 kubelet[2094]: I0909 00:19:41.638443 2094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:41.638539 kubelet[2094]: I0909 00:19:41.638462 2094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:41.638539 kubelet[2094]: I0909 00:19:41.638483 2094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:19:41.638539 kubelet[2094]: I0909 00:19:41.638498 2094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5446e6f9db452a90d6c93b05b5d7159e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5446e6f9db452a90d6c93b05b5d7159e\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:41.638539 kubelet[2094]: I0909 00:19:41.638518 2094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:41.638539 kubelet[2094]: I0909 00:19:41.638532 2094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5446e6f9db452a90d6c93b05b5d7159e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5446e6f9db452a90d6c93b05b5d7159e\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:41.638664 kubelet[2094]: I0909 00:19:41.638548 2094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5446e6f9db452a90d6c93b05b5d7159e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5446e6f9db452a90d6c93b05b5d7159e\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:41.638664 kubelet[2094]: I0909 00:19:41.638586 2094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:41.689395 kubelet[2094]: I0909 00:19:41.689364 2094 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:19:41.689813 kubelet[2094]: E0909 00:19:41.689768 2094 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost"
Sep 9 00:19:41.773779 kubelet[2094]: E0909 00:19:41.773682 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:41.774339 containerd[1444]: time="2025-09-09T00:19:41.774294334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Sep 9 00:19:41.785948 kubelet[2094]: E0909 00:19:41.785914 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:41.786258 containerd[1444]: time="2025-09-09T00:19:41.786224969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5446e6f9db452a90d6c93b05b5d7159e,Namespace:kube-system,Attempt:0,}"
Sep 9 00:19:41.789793 kubelet[2094]: E0909 00:19:41.789712 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:41.790047 containerd[1444]: time="2025-09-09T00:19:41.790007179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Sep 9 00:19:41.839630 kubelet[2094]: E0909 00:19:41.839552 2094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="800ms"
Sep 9 00:19:42.091951 kubelet[2094]: I0909 00:19:42.091853 2094 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:19:42.092214 kubelet[2094]: E0909 00:19:42.092181 2094 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost"
Sep 9 00:19:42.173997 kubelet[2094]: W0909 00:19:42.173939 2094 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused
Sep 9 00:19:42.173997 kubelet[2094]: E0909 00:19:42.174000 2094 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:19:42.249907 kubelet[2094]: W0909 00:19:42.249860 2094 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused
Sep 9 00:19:42.250016 kubelet[2094]: E0909 00:19:42.249917 2094 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:19:42.256202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112227013.mount: Deactivated successfully.
Sep 9 00:19:42.260203 containerd[1444]: time="2025-09-09T00:19:42.260163877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:19:42.261476 containerd[1444]: time="2025-09-09T00:19:42.261444588Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 9 00:19:42.262029 containerd[1444]: time="2025-09-09T00:19:42.261989919Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:19:42.263281 containerd[1444]: time="2025-09-09T00:19:42.263218323Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:19:42.263855 containerd[1444]: time="2025-09-09T00:19:42.263823674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Sep 9 00:19:42.265079 containerd[1444]: time="2025-09-09T00:19:42.264682170Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:19:42.265079 containerd[1444]: time="2025-09-09T00:19:42.264949541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 9 00:19:42.271554 containerd[1444]: time="2025-09-09T00:19:42.271521607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:19:42.272554 containerd[1444]: time="2025-09-09T00:19:42.272528633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 486.247207ms"
Sep 9 00:19:42.274572 containerd[1444]: time="2025-09-09T00:19:42.274542446Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 484.476654ms"
Sep 9 00:19:42.275775 containerd[1444]: time="2025-09-09T00:19:42.275751789Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 501.377426ms"
Sep 9 00:19:42.307457 kubelet[2094]: W0909 00:19:42.307366 2094 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused
Sep 9 00:19:42.307457 kubelet[2094]: E0909 00:19:42.307428 2094 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:19:42.374134 containerd[1444]: time="2025-09-09T00:19:42.373942609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:19:42.374258 containerd[1444]: time="2025-09-09T00:19:42.373950881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:19:42.374258 containerd[1444]: time="2025-09-09T00:19:42.374091099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:19:42.374258 containerd[1444]: time="2025-09-09T00:19:42.374111679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:42.374258 containerd[1444]: time="2025-09-09T00:19:42.374052378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:19:42.374258 containerd[1444]: time="2025-09-09T00:19:42.374081709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:42.374733 containerd[1444]: time="2025-09-09T00:19:42.374673673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:19:42.374733 containerd[1444]: time="2025-09-09T00:19:42.374712354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:19:42.374793 containerd[1444]: time="2025-09-09T00:19:42.374727179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:42.374914 containerd[1444]: time="2025-09-09T00:19:42.374889456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:42.375027 containerd[1444]: time="2025-09-09T00:19:42.374999745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:42.375296 containerd[1444]: time="2025-09-09T00:19:42.375199104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:42.393725 systemd[1]: Started cri-containerd-9c2c1731dba66d568fba0350c4c80822b3174d67445ae1bc1852872063697b16.scope - libcontainer container 9c2c1731dba66d568fba0350c4c80822b3174d67445ae1bc1852872063697b16.
Sep 9 00:19:42.395113 systemd[1]: Started cri-containerd-e1ecbf95c73a400ab6cc8d38ad8f8da5878723aa756d3f4724b096a4b64ac494.scope - libcontainer container e1ecbf95c73a400ab6cc8d38ad8f8da5878723aa756d3f4724b096a4b64ac494.
Sep 9 00:19:42.398505 systemd[1]: Started cri-containerd-fa19f69a2ef22fe36bc83ee7bd263909bb0a6c45db779873ca22729c61cc7f98.scope - libcontainer container fa19f69a2ef22fe36bc83ee7bd263909bb0a6c45db779873ca22729c61cc7f98.
Sep 9 00:19:42.427022 containerd[1444]: time="2025-09-09T00:19:42.426884048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5446e6f9db452a90d6c93b05b5d7159e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1ecbf95c73a400ab6cc8d38ad8f8da5878723aa756d3f4724b096a4b64ac494\""
Sep 9 00:19:42.429439 kubelet[2094]: E0909 00:19:42.429356 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:42.430235 containerd[1444]: time="2025-09-09T00:19:42.430033159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c2c1731dba66d568fba0350c4c80822b3174d67445ae1bc1852872063697b16\""
Sep 9 00:19:42.431313 containerd[1444]: time="2025-09-09T00:19:42.431282182Z" level=info msg="CreateContainer within sandbox \"e1ecbf95c73a400ab6cc8d38ad8f8da5878723aa756d3f4724b096a4b64ac494\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 00:19:42.431780 kubelet[2094]: E0909 00:19:42.431760 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:42.433477 containerd[1444]: time="2025-09-09T00:19:42.433410999Z" level=info msg="CreateContainer within sandbox \"9c2c1731dba66d568fba0350c4c80822b3174d67445ae1bc1852872063697b16\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 00:19:42.440632 containerd[1444]: time="2025-09-09T00:19:42.440603840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa19f69a2ef22fe36bc83ee7bd263909bb0a6c45db779873ca22729c61cc7f98\""
Sep 9 00:19:42.441164 kubelet[2094]: E0909 00:19:42.441143 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:42.442786 containerd[1444]: time="2025-09-09T00:19:42.442757633Z" level=info msg="CreateContainer within sandbox \"fa19f69a2ef22fe36bc83ee7bd263909bb0a6c45db779873ca22729c61cc7f98\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 00:19:42.446020 containerd[1444]: time="2025-09-09T00:19:42.445984146Z" level=info msg="CreateContainer within sandbox \"e1ecbf95c73a400ab6cc8d38ad8f8da5878723aa756d3f4724b096a4b64ac494\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5a77e8c3aee5d8a84676af5f7e8d7e1348e2adc8bcbe2026ee34a8face0d0462\""
Sep 9 00:19:42.446487 containerd[1444]: time="2025-09-09T00:19:42.446457469Z" level=info msg="StartContainer for \"5a77e8c3aee5d8a84676af5f7e8d7e1348e2adc8bcbe2026ee34a8face0d0462\""
Sep 9 00:19:42.451674 containerd[1444]: time="2025-09-09T00:19:42.451637136Z" level=info msg="CreateContainer within sandbox \"9c2c1731dba66d568fba0350c4c80822b3174d67445ae1bc1852872063697b16\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"97488c387b88c829e8cd859e40cbbd28b9734d1bab87bde4498de7535ef56da1\""
Sep 9 00:19:42.452096 containerd[1444]: time="2025-09-09T00:19:42.452062268Z" level=info msg="StartContainer for \"97488c387b88c829e8cd859e40cbbd28b9734d1bab87bde4498de7535ef56da1\""
Sep 9 00:19:42.459944 containerd[1444]: time="2025-09-09T00:19:42.459907693Z" level=info msg="CreateContainer within sandbox \"fa19f69a2ef22fe36bc83ee7bd263909bb0a6c45db779873ca22729c61cc7f98\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e46622ccfc4ecefc5940524ea9e0a3efd3ed5d62c9e2311288fa9beed2a87c70\""
Sep 9 00:19:42.461525 containerd[1444]: time="2025-09-09T00:19:42.461500570Z" level=info msg="StartContainer for \"e46622ccfc4ecefc5940524ea9e0a3efd3ed5d62c9e2311288fa9beed2a87c70\""
Sep 9 00:19:42.471717 systemd[1]: Started cri-containerd-5a77e8c3aee5d8a84676af5f7e8d7e1348e2adc8bcbe2026ee34a8face0d0462.scope - libcontainer container 5a77e8c3aee5d8a84676af5f7e8d7e1348e2adc8bcbe2026ee34a8face0d0462.
Sep 9 00:19:42.483727 systemd[1]: Started cri-containerd-97488c387b88c829e8cd859e40cbbd28b9734d1bab87bde4498de7535ef56da1.scope - libcontainer container 97488c387b88c829e8cd859e40cbbd28b9734d1bab87bde4498de7535ef56da1.
Sep 9 00:19:42.487374 systemd[1]: Started cri-containerd-e46622ccfc4ecefc5940524ea9e0a3efd3ed5d62c9e2311288fa9beed2a87c70.scope - libcontainer container e46622ccfc4ecefc5940524ea9e0a3efd3ed5d62c9e2311288fa9beed2a87c70.
Sep 9 00:19:42.515676 kubelet[2094]: W0909 00:19:42.515599 2094 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused
Sep 9 00:19:42.515676 kubelet[2094]: E0909 00:19:42.515641 2094 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:19:42.516118 containerd[1444]: time="2025-09-09T00:19:42.516075325Z" level=info msg="StartContainer for \"5a77e8c3aee5d8a84676af5f7e8d7e1348e2adc8bcbe2026ee34a8face0d0462\" returns successfully"
Sep 9 00:19:42.526869 containerd[1444]: time="2025-09-09T00:19:42.526819392Z" level=info msg="StartContainer for \"97488c387b88c829e8cd859e40cbbd28b9734d1bab87bde4498de7535ef56da1\" returns successfully"
Sep 9 00:19:42.526987 containerd[1444]: time="2025-09-09T00:19:42.526900510Z" level=info msg="StartContainer for \"e46622ccfc4ecefc5940524ea9e0a3efd3ed5d62c9e2311288fa9beed2a87c70\" returns successfully"
Sep 9 00:19:42.893891 kubelet[2094]: I0909 00:19:42.893849 2094 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:19:43.364189 kubelet[2094]: E0909 00:19:43.364099 2094 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:19:43.364270 kubelet[2094]: E0909 00:19:43.364226 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:43.366123 kubelet[2094]: E0909 00:19:43.366104 2094 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:19:43.368607 kubelet[2094]: E0909 00:19:43.366202 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:43.368607 kubelet[2094]: E0909 00:19:43.367478 2094 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:19:43.368607 kubelet[2094]: E0909 00:19:43.367594 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:44.369113 kubelet[2094]: E0909 00:19:44.369082 2094 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:19:44.369434 kubelet[2094]: E0909 00:19:44.369212 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:44.369434 kubelet[2094]: E0909 00:19:44.369406 2094 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:19:44.369489 kubelet[2094]: E0909 00:19:44.369473 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:44.471214 kubelet[2094]: E0909 00:19:44.471184 2094 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 9 00:19:44.562627 kubelet[2094]: I0909 00:19:44.562560 2094 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 00:19:44.562627 kubelet[2094]: E0909 00:19:44.562607 2094 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 9 00:19:44.639077 kubelet[2094]: I0909 00:19:44.638608 2094 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:19:44.643863 kubelet[2094]: E0909 00:19:44.643834 2094 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:19:44.643863 kubelet[2094]: I0909 00:19:44.643860 2094 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:44.645442 kubelet[2094]: E0909 00:19:44.645281 2094 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:44.645442 kubelet[2094]: I0909 00:19:44.645301 2094 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:44.646958 kubelet[2094]: E0909 00:19:44.646928 2094 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:45.232746 kubelet[2094]: I0909 00:19:45.232710 2094 apiserver.go:52] "Watching apiserver"
Sep 9 00:19:45.238191 kubelet[2094]: I0909 00:19:45.238154 2094 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 00:19:45.369506 kubelet[2094]: I0909 00:19:45.369467 2094 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:45.371340 kubelet[2094]: E0909 00:19:45.371318 2094 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:45.371488 kubelet[2094]: E0909 00:19:45.371472 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:46.284721 systemd[1]: Reloading requested from client PID 2368 ('systemctl') (unit session-7.scope)...
Sep 9 00:19:46.284735 systemd[1]: Reloading...
Sep 9 00:19:46.347301 zram_generator::config[2410]: No configuration found.
Sep 9 00:19:46.428683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:19:46.494766 systemd[1]: Reloading finished in 209 ms.
Sep 9 00:19:46.530305 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:19:46.546477 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 00:19:46.546729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:46.546788 systemd[1]: kubelet.service: Consumed 1.215s CPU time, 131.6M memory peak, 0B memory swap peak.
Sep 9 00:19:46.563154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:19:46.665586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:46.671778 (kubelet)[2449]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:19:46.706736 kubelet[2449]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:19:46.706736 kubelet[2449]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:19:46.706736 kubelet[2449]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:19:46.707089 kubelet[2449]: I0909 00:19:46.706768    2449 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:19:46.714858 kubelet[2449]: I0909 00:19:46.714820    2449 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 00:19:46.715665 kubelet[2449]: I0909 00:19:46.714983    2449 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:19:46.715665 kubelet[2449]: I0909 00:19:46.715230    2449 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 00:19:46.716549 kubelet[2449]: I0909 00:19:46.716517    2449 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 9 00:19:46.718784 kubelet[2449]: I0909 00:19:46.718758    2449 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:19:46.721540 kubelet[2449]: E0909 00:19:46.721499    2449 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 00:19:46.721603 kubelet[2449]: I0909 00:19:46.721588    2449 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 00:19:46.724331 kubelet[2449]: I0909 00:19:46.724303    2449 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:19:46.724617 kubelet[2449]: I0909 00:19:46.724578    2449 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:19:46.724821 kubelet[2449]: I0909 00:19:46.724605    2449 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:19:46.724896 kubelet[2449]: I0909 00:19:46.724827    2449 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:19:46.724896 kubelet[2449]: I0909 00:19:46.724836    2449 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 00:19:46.724896 kubelet[2449]: I0909 00:19:46.724879    2449 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:19:46.725059 kubelet[2449]: I0909 00:19:46.725024    2449 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 00:19:46.725059 kubelet[2449]: I0909 00:19:46.725042    2449 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:19:46.725111 kubelet[2449]: I0909 00:19:46.725068    2449 kubelet.go:352] "Adding apiserver pod source"
Sep 9 00:19:46.725111 kubelet[2449]: I0909 00:19:46.725079    2449 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:19:46.730586 kubelet[2449]: I0909 00:19:46.726296    2449 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 9 00:19:46.730586 kubelet[2449]: I0909 00:19:46.726813    2449 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 00:19:46.730586 kubelet[2449]: I0909 00:19:46.727241    2449 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 00:19:46.730586 kubelet[2449]: I0909 00:19:46.727267    2449 server.go:1287] "Started kubelet"
Sep 9 00:19:46.730586 kubelet[2449]: I0909 00:19:46.728634    2449 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:19:46.730586 kubelet[2449]: I0909 00:19:46.728945    2449 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:19:46.730586 kubelet[2449]: I0909 00:19:46.729008    2449 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:19:46.730586 kubelet[2449]: I0909 00:19:46.729189    2449 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:19:46.730586 kubelet[2449]: I0909 00:19:46.730405    2449 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 00:19:46.732806 kubelet[2449]: I0909 00:19:46.732781    2449 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:19:46.737133 kubelet[2449]: E0909 00:19:46.737101    2449 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:19:46.740819 kubelet[2449]: E0909 00:19:46.740713    2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:19:46.740819 kubelet[2449]: I0909 00:19:46.740757    2449 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 00:19:46.741159 kubelet[2449]: I0909 00:19:46.741137    2449 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 00:19:46.741933 kubelet[2449]: I0909 00:19:46.741358    2449 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:19:46.748833 kubelet[2449]: I0909 00:19:46.748796    2449 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:19:46.750651 kubelet[2449]: I0909 00:19:46.750632    2449 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:19:46.750761 kubelet[2449]: I0909 00:19:46.750750    2449 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 00:19:46.750863 kubelet[2449]: I0909 00:19:46.750838    2449 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 00:19:46.750914 kubelet[2449]: I0909 00:19:46.750906    2449 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 00:19:46.751009 kubelet[2449]: E0909 00:19:46.750993    2449 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:19:46.751715 kubelet[2449]: I0909 00:19:46.751690    2449 factory.go:221] Registration of the systemd container factory successfully
Sep 9 00:19:46.751838 kubelet[2449]: I0909 00:19:46.751819    2449 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:19:46.756524 kubelet[2449]: I0909 00:19:46.756489    2449 factory.go:221] Registration of the containerd container factory successfully
Sep 9 00:19:46.785834 kubelet[2449]: I0909 00:19:46.785808    2449 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 00:19:46.786816 kubelet[2449]: I0909 00:19:46.785968    2449 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 00:19:46.786816 kubelet[2449]: I0909 00:19:46.785993    2449 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:19:46.786816 kubelet[2449]: I0909 00:19:46.786140    2449 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 00:19:46.786816 kubelet[2449]: I0909 00:19:46.786151    2449 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 00:19:46.786816 kubelet[2449]: I0909 00:19:46.786168    2449 policy_none.go:49] "None policy: Start"
Sep 9 00:19:46.786816 kubelet[2449]: I0909 00:19:46.786177    2449 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 00:19:46.786816 kubelet[2449]: I0909 00:19:46.786186    2449 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:19:46.786816 kubelet[2449]: I0909 00:19:46.786278    2449 state_mem.go:75] "Updated machine memory state"
Sep 9 00:19:46.790392 kubelet[2449]: I0909 00:19:46.790352    2449 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 00:19:46.790542 kubelet[2449]: I0909 00:19:46.790516    2449 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:19:46.790608 kubelet[2449]: I0909 00:19:46.790541    2449 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:19:46.791461 kubelet[2449]: E0909 00:19:46.791418    2449 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 00:19:46.792179 kubelet[2449]: I0909 00:19:46.791718    2449 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:19:46.852009 kubelet[2449]: I0909 00:19:46.851902    2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:46.852009 kubelet[2449]: I0909 00:19:46.851978    2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:46.852131 kubelet[2449]: I0909 00:19:46.852014    2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:19:46.894784 kubelet[2449]: I0909 00:19:46.894758    2449 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:19:46.902396 kubelet[2449]: I0909 00:19:46.902367    2449 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 9 00:19:46.902787 kubelet[2449]: I0909 00:19:46.902602    2449 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 00:19:46.943036 kubelet[2449]: I0909 00:19:46.943001    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:46.943327 kubelet[2449]: I0909 00:19:46.943256    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:46.943539 kubelet[2449]: I0909 00:19:46.943292    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5446e6f9db452a90d6c93b05b5d7159e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5446e6f9db452a90d6c93b05b5d7159e\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:46.943539 kubelet[2449]: I0909 00:19:46.943394    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5446e6f9db452a90d6c93b05b5d7159e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5446e6f9db452a90d6c93b05b5d7159e\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:46.943539 kubelet[2449]: I0909 00:19:46.943412    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:46.943539 kubelet[2449]: I0909 00:19:46.943444    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:46.943539 kubelet[2449]: I0909 00:19:46.943463    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:19:46.943707 kubelet[2449]: I0909 00:19:46.943479    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5446e6f9db452a90d6c93b05b5d7159e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5446e6f9db452a90d6c93b05b5d7159e\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:46.943707 kubelet[2449]: I0909 00:19:46.943495    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:19:47.157417 kubelet[2449]: E0909 00:19:47.157310    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:47.157549 kubelet[2449]: E0909 00:19:47.157532    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:47.157708 kubelet[2449]: E0909 00:19:47.157677    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:47.282110 sudo[2489]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 9 00:19:47.282855 sudo[2489]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 9 00:19:47.708004 sudo[2489]: pam_unix(sudo:session): session closed for user root
Sep 9 00:19:47.726760 kubelet[2449]: I0909 00:19:47.726634    2449 apiserver.go:52] "Watching apiserver"
Sep 9 00:19:47.741806 kubelet[2449]: I0909 00:19:47.741770    2449 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 00:19:47.765294 kubelet[2449]: I0909 00:19:47.765263    2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:19:47.765660 kubelet[2449]: I0909 00:19:47.765638    2449 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:47.765734 kubelet[2449]: E0909 00:19:47.765638    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:47.775799 kubelet[2449]: E0909 00:19:47.774626    2449 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:19:47.775799 kubelet[2449]: E0909 00:19:47.774698    2449 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:19:47.775799 kubelet[2449]: E0909 00:19:47.774796    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:47.775799 kubelet[2449]: E0909 00:19:47.774828    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:47.799390 kubelet[2449]: I0909 00:19:47.799327    2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.799207448 podStartE2EDuration="1.799207448s" podCreationTimestamp="2025-09-09 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:47.789681043 +0000 UTC m=+1.114536033" watchObservedRunningTime="2025-09-09 00:19:47.799207448 +0000 UTC m=+1.124062438"
Sep 9 00:19:47.805028 kubelet[2449]: I0909 00:19:47.804939    2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8049275759999999 podStartE2EDuration="1.804927576s" podCreationTimestamp="2025-09-09 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:47.799625112 +0000 UTC m=+1.124480102" watchObservedRunningTime="2025-09-09 00:19:47.804927576 +0000 UTC m=+1.129782566"
Sep 9 00:19:47.813134 kubelet[2449]: I0909 00:19:47.812690    2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.812680976 podStartE2EDuration="1.812680976s" podCreationTimestamp="2025-09-09 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:47.805297585 +0000 UTC m=+1.130152575" watchObservedRunningTime="2025-09-09 00:19:47.812680976 +0000 UTC m=+1.137535966"
Sep 9 00:19:48.766322 kubelet[2449]: E0909 00:19:48.766276    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:48.766675 kubelet[2449]: E0909 00:19:48.766363    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:49.440554 sudo[1614]: pam_unix(sudo:session): session closed for user root
Sep 9 00:19:49.442002 sshd[1611]: pam_unix(sshd:session): session closed for user core
Sep 9 00:19:49.445164 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:40038.service: Deactivated successfully.
Sep 9 00:19:49.447817 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 00:19:49.447973 systemd[1]: session-7.scope: Consumed 6.452s CPU time, 152.0M memory peak, 0B memory swap peak.
Sep 9 00:19:49.448393 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
Sep 9 00:19:49.449211 systemd-logind[1422]: Removed session 7.
Sep 9 00:19:50.891857 kubelet[2449]: E0909 00:19:50.891812    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:51.895219 kubelet[2449]: E0909 00:19:51.895182    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:52.427106 kubelet[2449]: E0909 00:19:52.427060    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:52.775219 kubelet[2449]: E0909 00:19:52.773814    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:53.446872 kubelet[2449]: I0909 00:19:53.446706    2449 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 00:19:53.447205 kubelet[2449]: I0909 00:19:53.447186    2449 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 00:19:53.447232 containerd[1444]: time="2025-09-09T00:19:53.447018288Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 00:19:54.448668 systemd[1]: Created slice kubepods-besteffort-pod30b64cf0_3faf_4134_9440_806366ba7fb9.slice - libcontainer container kubepods-besteffort-pod30b64cf0_3faf_4134_9440_806366ba7fb9.slice.
Sep 9 00:19:54.472762 systemd[1]: Created slice kubepods-burstable-pod102d3534_5684_4468_9998_dfb590525263.slice - libcontainer container kubepods-burstable-pod102d3534_5684_4468_9998_dfb590525263.slice.
Sep 9 00:19:54.495295 kubelet[2449]: I0909 00:19:54.494939    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cilium-cgroup\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495295 kubelet[2449]: I0909 00:19:54.494979    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-xtables-lock\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495295 kubelet[2449]: I0909 00:19:54.495001    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-bpf-maps\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495295 kubelet[2449]: I0909 00:19:54.495016    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-lib-modules\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495295 kubelet[2449]: I0909 00:19:54.495031    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-host-proc-sys-kernel\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495295 kubelet[2449]: I0909 00:19:54.495047    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/102d3534-5684-4468-9998-dfb590525263-hubble-tls\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495758 kubelet[2449]: I0909 00:19:54.495060    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/30b64cf0-3faf-4134-9440-806366ba7fb9-kube-proxy\") pod \"kube-proxy-l7f6k\" (UID: \"30b64cf0-3faf-4134-9440-806366ba7fb9\") " pod="kube-system/kube-proxy-l7f6k"
Sep 9 00:19:54.495758 kubelet[2449]: I0909 00:19:54.495074    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-etc-cni-netd\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495758 kubelet[2449]: I0909 00:19:54.495088    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htlxc\" (UniqueName: \"kubernetes.io/projected/30b64cf0-3faf-4134-9440-806366ba7fb9-kube-api-access-htlxc\") pod \"kube-proxy-l7f6k\" (UID: \"30b64cf0-3faf-4134-9440-806366ba7fb9\") " pod="kube-system/kube-proxy-l7f6k"
Sep 9 00:19:54.495758 kubelet[2449]: I0909 00:19:54.495103    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc245\" (UniqueName: \"kubernetes.io/projected/102d3534-5684-4468-9998-dfb590525263-kube-api-access-bc245\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495758 kubelet[2449]: I0909 00:19:54.495117    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30b64cf0-3faf-4134-9440-806366ba7fb9-lib-modules\") pod \"kube-proxy-l7f6k\" (UID: \"30b64cf0-3faf-4134-9440-806366ba7fb9\") " pod="kube-system/kube-proxy-l7f6k"
Sep 9 00:19:54.495862 kubelet[2449]: I0909 00:19:54.495131    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cilium-run\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495862 kubelet[2449]: I0909 00:19:54.495145    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cni-path\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495862 kubelet[2449]: I0909 00:19:54.495161    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/102d3534-5684-4468-9998-dfb590525263-clustermesh-secrets\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495862 kubelet[2449]: I0909 00:19:54.495175    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-host-proc-sys-net\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495862 kubelet[2449]: I0909 00:19:54.495190    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30b64cf0-3faf-4134-9440-806366ba7fb9-xtables-lock\") pod \"kube-proxy-l7f6k\" (UID: \"30b64cf0-3faf-4134-9440-806366ba7fb9\") " pod="kube-system/kube-proxy-l7f6k"
Sep 9 00:19:54.495862 kubelet[2449]: I0909 00:19:54.495204    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-hostproc\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.495979 kubelet[2449]: I0909 00:19:54.495237    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/102d3534-5684-4468-9998-dfb590525263-cilium-config-path\") pod \"cilium-h6w56\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") " pod="kube-system/cilium-h6w56"
Sep 9 00:19:54.538483 systemd[1]: Created slice kubepods-besteffort-pod9ad3069b_e0f2_4278_95fc_cf229ad12f49.slice - libcontainer container kubepods-besteffort-pod9ad3069b_e0f2_4278_95fc_cf229ad12f49.slice.
Sep 9 00:19:54.596752 kubelet[2449]: I0909 00:19:54.596233    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ad3069b-e0f2-4278-95fc-cf229ad12f49-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bwpvs\" (UID: \"9ad3069b-e0f2-4278-95fc-cf229ad12f49\") " pod="kube-system/cilium-operator-6c4d7847fc-bwpvs"
Sep 9 00:19:54.596752 kubelet[2449]: I0909 00:19:54.596271    2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgrxr\" (UniqueName: \"kubernetes.io/projected/9ad3069b-e0f2-4278-95fc-cf229ad12f49-kube-api-access-qgrxr\") pod \"cilium-operator-6c4d7847fc-bwpvs\" (UID: \"9ad3069b-e0f2-4278-95fc-cf229ad12f49\") " pod="kube-system/cilium-operator-6c4d7847fc-bwpvs"
Sep 9 00:19:54.760536 kubelet[2449]: E0909 00:19:54.760428    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:54.761276 containerd[1444]: time="2025-09-09T00:19:54.761192625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7f6k,Uid:30b64cf0-3faf-4134-9440-806366ba7fb9,Namespace:kube-system,Attempt:0,}"
Sep 9 00:19:54.776205 kubelet[2449]: E0909 00:19:54.775668    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:54.776601 containerd[1444]: time="2025-09-09T00:19:54.776290790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h6w56,Uid:102d3534-5684-4468-9998-dfb590525263,Namespace:kube-system,Attempt:0,}"
Sep 9 00:19:54.782315 containerd[1444]: time="2025-09-09T00:19:54.782232901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:19:54.782395 containerd[1444]: time="2025-09-09T00:19:54.782286702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:19:54.782451 containerd[1444]: time="2025-09-09T00:19:54.782297823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:54.782668 containerd[1444]: time="2025-09-09T00:19:54.782552627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:54.797674 containerd[1444]: time="2025-09-09T00:19:54.797594031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:19:54.797674 containerd[1444]: time="2025-09-09T00:19:54.797649712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:19:54.797674 containerd[1444]: time="2025-09-09T00:19:54.797670552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:54.797885 containerd[1444]: time="2025-09-09T00:19:54.797774634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:54.801705 systemd[1]: Started cri-containerd-671531ad4b8137d3db44e08b84884cf71f0cc999bf8ceb26653931e1fff122e2.scope - libcontainer container 671531ad4b8137d3db44e08b84884cf71f0cc999bf8ceb26653931e1fff122e2.
Sep 9 00:19:54.816801 systemd[1]: Started cri-containerd-4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52.scope - libcontainer container 4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52.
Sep 9 00:19:54.834321 containerd[1444]: time="2025-09-09T00:19:54.834216920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7f6k,Uid:30b64cf0-3faf-4134-9440-806366ba7fb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"671531ad4b8137d3db44e08b84884cf71f0cc999bf8ceb26653931e1fff122e2\""
Sep 9 00:19:54.835442 kubelet[2449]: E0909 00:19:54.834952    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:54.839321 containerd[1444]: time="2025-09-09T00:19:54.839284535Z" level=info msg="CreateContainer within sandbox \"671531ad4b8137d3db44e08b84884cf71f0cc999bf8ceb26653931e1fff122e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 00:19:54.841497 kubelet[2449]: E0909 00:19:54.841461    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:54.841911 containerd[1444]: time="2025-09-09T00:19:54.841886184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bwpvs,Uid:9ad3069b-e0f2-4278-95fc-cf229ad12f49,Namespace:kube-system,Attempt:0,}"
Sep 9 00:19:54.845768 containerd[1444]: time="2025-09-09T00:19:54.845730177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h6w56,Uid:102d3534-5684-4468-9998-dfb590525263,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\""
Sep 9 00:19:54.846429 kubelet[2449]: E0909 00:19:54.846376    2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:54.847527 containerd[1444]: time="2025-09-09T00:19:54.847499130Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 9 00:19:54.862374 containerd[1444]: time="2025-09-09T00:19:54.862314009Z" level=info msg="CreateContainer within sandbox \"671531ad4b8137d3db44e08b84884cf71f0cc999bf8ceb26653931e1fff122e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"37d4fd8bce26a60be01e68a9f2dbaaaa1a252e0a382673532fee1ef9f89819ff\""
Sep 9 00:19:54.863133 containerd[1444]: time="2025-09-09T00:19:54.863105144Z" level=info msg="StartContainer for \"37d4fd8bce26a60be01e68a9f2dbaaaa1a252e0a382673532fee1ef9f89819ff\""
Sep 9 00:19:54.869069 containerd[1444]: time="2025-09-09T00:19:54.868826811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:19:54.869069 containerd[1444]: time="2025-09-09T00:19:54.868907933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:19:54.869069 containerd[1444]: time="2025-09-09T00:19:54.868923453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:54.869327 containerd[1444]: time="2025-09-09T00:19:54.869266020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:19:54.888759 systemd[1]: Started cri-containerd-c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e.scope - libcontainer container c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e.
Sep 9 00:19:54.891487 systemd[1]: Started cri-containerd-37d4fd8bce26a60be01e68a9f2dbaaaa1a252e0a382673532fee1ef9f89819ff.scope - libcontainer container 37d4fd8bce26a60be01e68a9f2dbaaaa1a252e0a382673532fee1ef9f89819ff.
Sep 9 00:19:54.924474 containerd[1444]: time="2025-09-09T00:19:54.924388817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bwpvs,Uid:9ad3069b-e0f2-4278-95fc-cf229ad12f49,Namespace:kube-system,Attempt:0,} returns sandbox id \"c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e\""
Sep 9 00:19:54.924474 containerd[1444]: time="2025-09-09T00:19:54.924417498Z" level=info msg="StartContainer for \"37d4fd8bce26a60be01e68a9f2dbaaaa1a252e0a382673532fee1ef9f89819ff\" returns successfully"
Sep 9 00:19:54.926788 kubelet[2449]: E0909 00:19:54.926764 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:55.781103 kubelet[2449]: E0909 00:19:55.781056 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:55.793590 kubelet[2449]: I0909 00:19:55.791082 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l7f6k" podStartSLOduration=1.791064601 podStartE2EDuration="1.791064601s" podCreationTimestamp="2025-09-09 00:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:55.789689337 +0000 UTC m=+9.114544327" watchObservedRunningTime="2025-09-09 00:19:55.791064601 +0000 UTC m=+9.115919551"
Sep 9 00:20:00.913781 kubelet[2449]: E0909 00:20:00.913690 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:01.904844 kubelet[2449]: E0909 00:20:01.904022 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:02.795238 kubelet[2449]: E0909 00:20:02.795209 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:03.127705 update_engine[1427]: I20250909 00:20:03.127547 1427 update_attempter.cc:509] Updating boot flags...
Sep 9 00:20:03.194679 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2827)
Sep 9 00:20:03.237612 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2828)
Sep 9 00:20:06.420779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1098373786.mount: Deactivated successfully.
Sep 9 00:20:07.708936 containerd[1444]: time="2025-09-09T00:20:07.708875501Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 9 00:20:07.711399 containerd[1444]: time="2025-09-09T00:20:07.711350484Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.863794193s"
Sep 9 00:20:07.711399 containerd[1444]: time="2025-09-09T00:20:07.711395845Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 9 00:20:07.715391 containerd[1444]: time="2025-09-09T00:20:07.715196281Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 9 00:20:07.716596 containerd[1444]: time="2025-09-09T00:20:07.716382612Z" level=info msg="CreateContainer within sandbox \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 00:20:07.748492 containerd[1444]: time="2025-09-09T00:20:07.748415558Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:07.749481 containerd[1444]: time="2025-09-09T00:20:07.749440368Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:07.771728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985288002.mount: Deactivated successfully.
Sep 9 00:20:07.773791 containerd[1444]: time="2025-09-09T00:20:07.773751920Z" level=info msg="CreateContainer within sandbox \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b\""
Sep 9 00:20:07.774370 containerd[1444]: time="2025-09-09T00:20:07.774333805Z" level=info msg="StartContainer for \"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b\""
Sep 9 00:20:07.802118 systemd[1]: Started cri-containerd-a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b.scope - libcontainer container a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b.
Sep 9 00:20:07.823186 containerd[1444]: time="2025-09-09T00:20:07.823147231Z" level=info msg="StartContainer for \"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b\" returns successfully"
Sep 9 00:20:07.832433 systemd[1]: cri-containerd-a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b.scope: Deactivated successfully.
Sep 9 00:20:08.024521 containerd[1444]: time="2025-09-09T00:20:08.017489918Z" level=info msg="shim disconnected" id=a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b namespace=k8s.io
Sep 9 00:20:08.024521 containerd[1444]: time="2025-09-09T00:20:08.024414901Z" level=warning msg="cleaning up after shim disconnected" id=a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b namespace=k8s.io
Sep 9 00:20:08.024521 containerd[1444]: time="2025-09-09T00:20:08.024428141Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:20:08.759048 systemd[1]: run-containerd-runc-k8s.io-a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b-runc.ay2y2H.mount: Deactivated successfully.
Sep 9 00:20:08.759129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b-rootfs.mount: Deactivated successfully.
Sep 9 00:20:08.818340 kubelet[2449]: E0909 00:20:08.818286 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:08.820000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4256663054.mount: Deactivated successfully.
Sep 9 00:20:08.822943 containerd[1444]: time="2025-09-09T00:20:08.822546884Z" level=info msg="CreateContainer within sandbox \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 00:20:08.844769 containerd[1444]: time="2025-09-09T00:20:08.843940878Z" level=info msg="CreateContainer within sandbox \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce\""
Sep 9 00:20:08.846150 containerd[1444]: time="2025-09-09T00:20:08.845974177Z" level=info msg="StartContainer for \"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce\""
Sep 9 00:20:08.887750 systemd[1]: Started cri-containerd-81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce.scope - libcontainer container 81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce.
Sep 9 00:20:08.934135 containerd[1444]: time="2025-09-09T00:20:08.934016498Z" level=info msg="StartContainer for \"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce\" returns successfully"
Sep 9 00:20:08.945378 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:20:08.945608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:20:08.945672 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:20:08.953875 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:20:08.954109 systemd[1]: cri-containerd-81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce.scope: Deactivated successfully.
Sep 9 00:20:08.968688 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:20:08.992606 containerd[1444]: time="2025-09-09T00:20:08.992403869Z" level=info msg="shim disconnected" id=81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce namespace=k8s.io
Sep 9 00:20:08.992606 containerd[1444]: time="2025-09-09T00:20:08.992470110Z" level=warning msg="cleaning up after shim disconnected" id=81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce namespace=k8s.io
Sep 9 00:20:08.992606 containerd[1444]: time="2025-09-09T00:20:08.992482350Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:20:09.426253 containerd[1444]: time="2025-09-09T00:20:09.426209001Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:09.426819 containerd[1444]: time="2025-09-09T00:20:09.426773286Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 9 00:20:09.427617 containerd[1444]: time="2025-09-09T00:20:09.427559173Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:20:09.431813 containerd[1444]: time="2025-09-09T00:20:09.431763769Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.716529888s"
Sep 9 00:20:09.431813 containerd[1444]: time="2025-09-09T00:20:09.431808169Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 9 00:20:09.434893 containerd[1444]: time="2025-09-09T00:20:09.434853476Z" level=info msg="CreateContainer within sandbox \"c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 9 00:20:09.444055 containerd[1444]: time="2025-09-09T00:20:09.443995395Z" level=info msg="CreateContainer within sandbox \"c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\""
Sep 9 00:20:09.444607 containerd[1444]: time="2025-09-09T00:20:09.444582480Z" level=info msg="StartContainer for \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\""
Sep 9 00:20:09.468774 systemd[1]: Started cri-containerd-d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705.scope - libcontainer container d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705.
Sep 9 00:20:09.488015 containerd[1444]: time="2025-09-09T00:20:09.487974617Z" level=info msg="StartContainer for \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\" returns successfully"
Sep 9 00:20:09.759721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce-rootfs.mount: Deactivated successfully.
Sep 9 00:20:09.822723 kubelet[2449]: E0909 00:20:09.822685 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:09.827348 kubelet[2449]: E0909 00:20:09.827308 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:09.830174 containerd[1444]: time="2025-09-09T00:20:09.830131229Z" level=info msg="CreateContainer within sandbox \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 00:20:09.832261 kubelet[2449]: I0909 00:20:09.832192 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bwpvs" podStartSLOduration=1.3267883409999999 podStartE2EDuration="15.832177287s" podCreationTimestamp="2025-09-09 00:19:54 +0000 UTC" firstStartedPulling="2025-09-09 00:19:54.927240191 +0000 UTC m=+8.252095141" lastFinishedPulling="2025-09-09 00:20:09.432629097 +0000 UTC m=+22.757484087" observedRunningTime="2025-09-09 00:20:09.832098126 +0000 UTC m=+23.156953156" watchObservedRunningTime="2025-09-09 00:20:09.832177287 +0000 UTC m=+23.157032277"
Sep 9 00:20:09.853880 containerd[1444]: time="2025-09-09T00:20:09.853822075Z" level=info msg="CreateContainer within sandbox \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c\""
Sep 9 00:20:09.855324 containerd[1444]: time="2025-09-09T00:20:09.855296848Z" level=info msg="StartContainer for \"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c\""
Sep 9 00:20:09.889737 systemd[1]: Started cri-containerd-9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c.scope - libcontainer container 9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c.
Sep 9 00:20:09.918488 containerd[1444]: time="2025-09-09T00:20:09.918440036Z" level=info msg="StartContainer for \"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c\" returns successfully"
Sep 9 00:20:09.920235 systemd[1]: cri-containerd-9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c.scope: Deactivated successfully.
Sep 9 00:20:09.940903 containerd[1444]: time="2025-09-09T00:20:09.940835831Z" level=info msg="shim disconnected" id=9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c namespace=k8s.io
Sep 9 00:20:09.940903 containerd[1444]: time="2025-09-09T00:20:09.940899752Z" level=warning msg="cleaning up after shim disconnected" id=9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c namespace=k8s.io
Sep 9 00:20:09.940903 containerd[1444]: time="2025-09-09T00:20:09.940908272Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:20:10.758929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c-rootfs.mount: Deactivated successfully.
Sep 9 00:20:10.831311 kubelet[2449]: E0909 00:20:10.830789 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:10.831311 kubelet[2449]: E0909 00:20:10.830936 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:10.833196 containerd[1444]: time="2025-09-09T00:20:10.833156099Z" level=info msg="CreateContainer within sandbox \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 00:20:10.847505 containerd[1444]: time="2025-09-09T00:20:10.847435617Z" level=info msg="CreateContainer within sandbox \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71\""
Sep 9 00:20:10.848216 containerd[1444]: time="2025-09-09T00:20:10.848114983Z" level=info msg="StartContainer for \"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71\""
Sep 9 00:20:10.880747 systemd[1]: Started cri-containerd-090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71.scope - libcontainer container 090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71.
Sep 9 00:20:10.899041 systemd[1]: cri-containerd-090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71.scope: Deactivated successfully.
Sep 9 00:20:10.900775 containerd[1444]: time="2025-09-09T00:20:10.900739780Z" level=info msg="StartContainer for \"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71\" returns successfully"
Sep 9 00:20:10.920697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71-rootfs.mount: Deactivated successfully.
Sep 9 00:20:10.927296 containerd[1444]: time="2025-09-09T00:20:10.927178559Z" level=info msg="shim disconnected" id=090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71 namespace=k8s.io
Sep 9 00:20:10.927296 containerd[1444]: time="2025-09-09T00:20:10.927242519Z" level=warning msg="cleaning up after shim disconnected" id=090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71 namespace=k8s.io
Sep 9 00:20:10.927296 containerd[1444]: time="2025-09-09T00:20:10.927251160Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:20:11.834513 kubelet[2449]: E0909 00:20:11.834327 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:11.837372 containerd[1444]: time="2025-09-09T00:20:11.837320328Z" level=info msg="CreateContainer within sandbox \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 00:20:11.853010 containerd[1444]: time="2025-09-09T00:20:11.851234999Z" level=info msg="CreateContainer within sandbox \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\""
Sep 9 00:20:11.853010 containerd[1444]: time="2025-09-09T00:20:11.851665122Z" level=info msg="StartContainer for \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\""
Sep 9 00:20:11.878743 systemd[1]: Started cri-containerd-868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c.scope - libcontainer container 868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c.
Sep 9 00:20:11.904028 containerd[1444]: time="2025-09-09T00:20:11.903918217Z" level=info msg="StartContainer for \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\" returns successfully"
Sep 9 00:20:11.987730 kubelet[2449]: I0909 00:20:11.987688 2449 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 9 00:20:12.020100 systemd[1]: Created slice kubepods-burstable-pod89ade9a5_73f1_4e46_aed3_a0dde992e4b3.slice - libcontainer container kubepods-burstable-pod89ade9a5_73f1_4e46_aed3_a0dde992e4b3.slice.
Sep 9 00:20:12.026980 systemd[1]: Created slice kubepods-burstable-pod998979fa_2d0d_4325_a602_26a025f1636b.slice - libcontainer container kubepods-burstable-pod998979fa_2d0d_4325_a602_26a025f1636b.slice.
Sep 9 00:20:12.112226 kubelet[2449]: I0909 00:20:12.112109 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89ade9a5-73f1-4e46-aed3-a0dde992e4b3-config-volume\") pod \"coredns-668d6bf9bc-qs2gn\" (UID: \"89ade9a5-73f1-4e46-aed3-a0dde992e4b3\") " pod="kube-system/coredns-668d6bf9bc-qs2gn"
Sep 9 00:20:12.112226 kubelet[2449]: I0909 00:20:12.112152 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/998979fa-2d0d-4325-a602-26a025f1636b-config-volume\") pod \"coredns-668d6bf9bc-zp5tp\" (UID: \"998979fa-2d0d-4325-a602-26a025f1636b\") " pod="kube-system/coredns-668d6bf9bc-zp5tp"
Sep 9 00:20:12.112226 kubelet[2449]: I0909 00:20:12.112184 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqx24\" (UniqueName: \"kubernetes.io/projected/89ade9a5-73f1-4e46-aed3-a0dde992e4b3-kube-api-access-lqx24\") pod \"coredns-668d6bf9bc-qs2gn\" (UID: \"89ade9a5-73f1-4e46-aed3-a0dde992e4b3\") " pod="kube-system/coredns-668d6bf9bc-qs2gn"
Sep 9 00:20:12.112226 kubelet[2449]: I0909 00:20:12.112203 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8j5g\" (UniqueName: \"kubernetes.io/projected/998979fa-2d0d-4325-a602-26a025f1636b-kube-api-access-r8j5g\") pod \"coredns-668d6bf9bc-zp5tp\" (UID: \"998979fa-2d0d-4325-a602-26a025f1636b\") " pod="kube-system/coredns-668d6bf9bc-zp5tp"
Sep 9 00:20:12.324172 kubelet[2449]: E0909 00:20:12.324137 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:12.324947 containerd[1444]: time="2025-09-09T00:20:12.324913407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qs2gn,Uid:89ade9a5-73f1-4e46-aed3-a0dde992e4b3,Namespace:kube-system,Attempt:0,}"
Sep 9 00:20:12.330694 kubelet[2449]: E0909 00:20:12.330658 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:12.332964 containerd[1444]: time="2025-09-09T00:20:12.332919827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zp5tp,Uid:998979fa-2d0d-4325-a602-26a025f1636b,Namespace:kube-system,Attempt:0,}"
Sep 9 00:20:12.842036 kubelet[2449]: E0909 00:20:12.841923 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:12.853840 kubelet[2449]: I0909 00:20:12.853780 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h6w56" podStartSLOduration=5.985686662 podStartE2EDuration="18.853763102s" podCreationTimestamp="2025-09-09 00:19:54 +0000 UTC" firstStartedPulling="2025-09-09 00:19:54.84696984 +0000 UTC m=+8.171824830" lastFinishedPulling="2025-09-09 00:20:07.71504628 +0000 UTC m=+21.039901270" observedRunningTime="2025-09-09 00:20:12.853618541 +0000 UTC m=+26.178473611" watchObservedRunningTime="2025-09-09 00:20:12.853763102 +0000 UTC m=+26.178618092"
Sep 9 00:20:13.842447 kubelet[2449]: E0909 00:20:13.840849 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:13.952339 systemd-networkd[1373]: cilium_host: Link UP
Sep 9 00:20:13.952469 systemd-networkd[1373]: cilium_net: Link UP
Sep 9 00:20:13.955685 systemd-networkd[1373]: cilium_net: Gained carrier
Sep 9 00:20:13.956479 systemd-networkd[1373]: cilium_host: Gained carrier
Sep 9 00:20:13.961245 systemd-networkd[1373]: cilium_net: Gained IPv6LL
Sep 9 00:20:13.961410 systemd-networkd[1373]: cilium_host: Gained IPv6LL
Sep 9 00:20:14.052487 systemd-networkd[1373]: cilium_vxlan: Link UP
Sep 9 00:20:14.052494 systemd-networkd[1373]: cilium_vxlan: Gained carrier
Sep 9 00:20:14.271993 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:56582.service - OpenSSH per-connection server daemon (10.0.0.1:56582).
Sep 9 00:20:14.312379 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 56582 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:14.313845 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:14.320152 systemd-logind[1422]: New session 8 of user core.
Sep 9 00:20:14.326613 kernel: NET: Registered PF_ALG protocol family
Sep 9 00:20:14.331757 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 00:20:14.459287 sshd[3399]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:14.462663 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit.
Sep 9 00:20:14.462967 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:56582.service: Deactivated successfully.
Sep 9 00:20:14.464543 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 00:20:14.465330 systemd-logind[1422]: Removed session 8.
Sep 9 00:20:14.843121 kubelet[2449]: E0909 00:20:14.843039 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:14.884265 systemd-networkd[1373]: lxc_health: Link UP
Sep 9 00:20:14.891174 systemd-networkd[1373]: lxc_health: Gained carrier
Sep 9 00:20:15.387633 systemd-networkd[1373]: lxc90433f7c6d19: Link UP
Sep 9 00:20:15.394606 kernel: eth0: renamed from tmp1db03
Sep 9 00:20:15.403173 systemd-networkd[1373]: lxc90433f7c6d19: Gained carrier
Sep 9 00:20:15.404083 systemd-networkd[1373]: lxc748cc72e7e4f: Link UP
Sep 9 00:20:15.411604 kernel: eth0: renamed from tmp7dd19
Sep 9 00:20:15.418102 systemd-networkd[1373]: lxc748cc72e7e4f: Gained carrier
Sep 9 00:20:15.750751 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL
Sep 9 00:20:16.262748 systemd-networkd[1373]: lxc_health: Gained IPv6LL
Sep 9 00:20:16.787819 kubelet[2449]: E0909 00:20:16.787320 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:16.966750 systemd-networkd[1373]: lxc90433f7c6d19: Gained IPv6LL
Sep 9 00:20:17.095095 systemd-networkd[1373]: lxc748cc72e7e4f: Gained IPv6LL
Sep 9 00:20:18.949186 containerd[1444]: time="2025-09-09T00:20:18.949065456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:20:18.949186 containerd[1444]: time="2025-09-09T00:20:18.949149617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:20:18.949186 containerd[1444]: time="2025-09-09T00:20:18.949169657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:20:18.949714 containerd[1444]: time="2025-09-09T00:20:18.949258818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:20:18.967485 containerd[1444]: time="2025-09-09T00:20:18.967161244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:20:18.967485 containerd[1444]: time="2025-09-09T00:20:18.967211764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:20:18.967485 containerd[1444]: time="2025-09-09T00:20:18.967233085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:20:18.967485 containerd[1444]: time="2025-09-09T00:20:18.967307925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:20:18.973920 systemd[1]: Started cri-containerd-7dd1973e50aa5fb602021f08ddddc4f53098df8b030e33d18bfbe2dc218d77ad.scope - libcontainer container 7dd1973e50aa5fb602021f08ddddc4f53098df8b030e33d18bfbe2dc218d77ad.
Sep 9 00:20:18.984168 systemd[1]: Started cri-containerd-1db03e9456226b6d0e1ac18ba80de47082b1ef97d805c2f8b8c1fdf1bda7a889.scope - libcontainer container 1db03e9456226b6d0e1ac18ba80de47082b1ef97d805c2f8b8c1fdf1bda7a889.
Sep 9 00:20:18.993179 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:20:18.994471 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:20:19.014695 containerd[1444]: time="2025-09-09T00:20:19.014601404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zp5tp,Uid:998979fa-2d0d-4325-a602-26a025f1636b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1db03e9456226b6d0e1ac18ba80de47082b1ef97d805c2f8b8c1fdf1bda7a889\""
Sep 9 00:20:19.016057 kubelet[2449]: E0909 00:20:19.016029 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:19.019598 containerd[1444]: time="2025-09-09T00:20:19.019509152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qs2gn,Uid:89ade9a5-73f1-4e46-aed3-a0dde992e4b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dd1973e50aa5fb602021f08ddddc4f53098df8b030e33d18bfbe2dc218d77ad\""
Sep 9 00:20:19.019913 containerd[1444]: time="2025-09-09T00:20:19.019853034Z" level=info msg="CreateContainer within sandbox \"1db03e9456226b6d0e1ac18ba80de47082b1ef97d805c2f8b8c1fdf1bda7a889\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 00:20:19.020114 kubelet[2449]: E0909 00:20:19.020002 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:19.021630 containerd[1444]: time="2025-09-09T00:20:19.021604444Z" level=info msg="CreateContainer within sandbox \"7dd1973e50aa5fb602021f08ddddc4f53098df8b030e33d18bfbe2dc218d77ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 00:20:19.033175 containerd[1444]: time="2025-09-09T00:20:19.033043389Z" level=info msg="CreateContainer within sandbox \"1db03e9456226b6d0e1ac18ba80de47082b1ef97d805c2f8b8c1fdf1bda7a889\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1eb38719f3a8bf934ebd0a33186035e771b1e5f7058fe9f237e9720bf3f03f36\""
Sep 9 00:20:19.033538 containerd[1444]: time="2025-09-09T00:20:19.033513752Z" level=info msg="StartContainer for \"1eb38719f3a8bf934ebd0a33186035e771b1e5f7058fe9f237e9720bf3f03f36\""
Sep 9 00:20:19.038760 containerd[1444]: time="2025-09-09T00:20:19.038722342Z" level=info msg="CreateContainer within sandbox \"7dd1973e50aa5fb602021f08ddddc4f53098df8b030e33d18bfbe2dc218d77ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ea7c4a557bc2ced90d23e8c94ec49844f3452f671d7264eb679652cec975c86\""
Sep 9 00:20:19.039365 containerd[1444]: time="2025-09-09T00:20:19.039202425Z" level=info msg="StartContainer for \"1ea7c4a557bc2ced90d23e8c94ec49844f3452f671d7264eb679652cec975c86\""
Sep 9 00:20:19.061725 systemd[1]: Started cri-containerd-1eb38719f3a8bf934ebd0a33186035e771b1e5f7058fe9f237e9720bf3f03f36.scope - libcontainer container 1eb38719f3a8bf934ebd0a33186035e771b1e5f7058fe9f237e9720bf3f03f36.
Sep 9 00:20:19.063963 systemd[1]: Started cri-containerd-1ea7c4a557bc2ced90d23e8c94ec49844f3452f671d7264eb679652cec975c86.scope - libcontainer container 1ea7c4a557bc2ced90d23e8c94ec49844f3452f671d7264eb679652cec975c86.
Sep 9 00:20:19.085807 containerd[1444]: time="2025-09-09T00:20:19.085771972Z" level=info msg="StartContainer for \"1eb38719f3a8bf934ebd0a33186035e771b1e5f7058fe9f237e9720bf3f03f36\" returns successfully"
Sep 9 00:20:19.089611 containerd[1444]: time="2025-09-09T00:20:19.089580834Z" level=info msg="StartContainer for \"1ea7c4a557bc2ced90d23e8c94ec49844f3452f671d7264eb679652cec975c86\" returns successfully"
Sep 9 00:20:19.471466 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:56598.service - OpenSSH per-connection server daemon (10.0.0.1:56598).
Sep 9 00:20:19.518008 sshd[3869]: Accepted publickey for core from 10.0.0.1 port 56598 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:19.519346 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:19.523480 systemd-logind[1422]: New session 9 of user core.
Sep 9 00:20:19.530710 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 00:20:19.640738 sshd[3869]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:19.644051 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:56598.service: Deactivated successfully.
Sep 9 00:20:19.645697 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 00:20:19.646316 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit.
Sep 9 00:20:19.647138 systemd-logind[1422]: Removed session 9.
Sep 9 00:20:19.855105 kubelet[2449]: E0909 00:20:19.853775 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:19.856087 kubelet[2449]: E0909 00:20:19.856051 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:19.864101 kubelet[2449]: I0909 00:20:19.864046 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zp5tp" podStartSLOduration=25.864031955 podStartE2EDuration="25.864031955s" podCreationTimestamp="2025-09-09 00:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:20:19.863378911 +0000 UTC m=+33.188233901" watchObservedRunningTime="2025-09-09 00:20:19.864031955 +0000 UTC m=+33.188886945"
Sep 9 00:20:19.874526 kubelet[2449]: I0909 00:20:19.873267 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qs2gn" podStartSLOduration=25.873250848 podStartE2EDuration="25.873250848s" podCreationTimestamp="2025-09-09 00:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:20:19.873226207 +0000 UTC m=+33.198081197" watchObservedRunningTime="2025-09-09 00:20:19.873250848 +0000 UTC m=+33.198105798"
Sep 9 00:20:20.858065 kubelet[2449]: E0909 00:20:20.857918 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:20.858065 kubelet[2449]: E0909 00:20:20.857986 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:21.859470 kubelet[2449]: E0909 00:20:21.859150 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:21.859470 kubelet[2449]: E0909 00:20:21.859317 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:22.519643 kubelet[2449]: I0909 00:20:22.519595 2449 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 00:20:22.520104 kubelet[2449]: E0909 00:20:22.520083 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:22.863677 kubelet[2449]: E0909 00:20:22.862407 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:24.653838 systemd[1]: Started sshd@9-10.0.0.53:22-10.0.0.1:37182.service - OpenSSH per-connection server daemon (10.0.0.1:37182).
Sep 9 00:20:24.703743 sshd[3890]: Accepted publickey for core from 10.0.0.1 port 37182 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:24.705171 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:24.714957 systemd-logind[1422]: New session 10 of user core.
Sep 9 00:20:24.724770 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 00:20:24.882509 sshd[3890]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:24.886557 systemd[1]: sshd@9-10.0.0.53:22-10.0.0.1:37182.service: Deactivated successfully.
Sep 9 00:20:24.888779 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 00:20:24.889352 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit.
Sep 9 00:20:24.890749 systemd-logind[1422]: Removed session 10.
Sep 9 00:20:29.895463 systemd[1]: Started sshd@10-10.0.0.53:22-10.0.0.1:37194.service - OpenSSH per-connection server daemon (10.0.0.1:37194).
Sep 9 00:20:29.938293 sshd[3912]: Accepted publickey for core from 10.0.0.1 port 37194 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:29.939641 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:29.944194 systemd-logind[1422]: New session 11 of user core.
Sep 9 00:20:29.954824 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 00:20:30.078494 sshd[3912]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:30.091525 systemd[1]: sshd@10-10.0.0.53:22-10.0.0.1:37194.service: Deactivated successfully.
Sep 9 00:20:30.093750 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 00:20:30.095139 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit.
Sep 9 00:20:30.105933 systemd[1]: Started sshd@11-10.0.0.53:22-10.0.0.1:38876.service - OpenSSH per-connection server daemon (10.0.0.1:38876).
Sep 9 00:20:30.106792 systemd-logind[1422]: Removed session 11.
Sep 9 00:20:30.144079 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 38876 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:30.145525 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:30.151824 systemd-logind[1422]: New session 12 of user core.
Sep 9 00:20:30.161799 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 00:20:30.323363 sshd[3927]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:30.338222 systemd[1]: sshd@11-10.0.0.53:22-10.0.0.1:38876.service: Deactivated successfully.
Sep 9 00:20:30.342775 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 00:20:30.345888 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit.
Sep 9 00:20:30.354966 systemd[1]: Started sshd@12-10.0.0.53:22-10.0.0.1:38888.service - OpenSSH per-connection server daemon (10.0.0.1:38888).
Sep 9 00:20:30.355957 systemd-logind[1422]: Removed session 12.
Sep 9 00:20:30.387850 sshd[3940]: Accepted publickey for core from 10.0.0.1 port 38888 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:30.389200 sshd[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:30.394212 systemd-logind[1422]: New session 13 of user core.
Sep 9 00:20:30.401748 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 00:20:30.527660 sshd[3940]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:30.530742 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit.
Sep 9 00:20:30.532275 systemd[1]: sshd@12-10.0.0.53:22-10.0.0.1:38888.service: Deactivated successfully.
Sep 9 00:20:30.535482 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 00:20:30.539487 systemd-logind[1422]: Removed session 13.
Sep 9 00:20:35.543513 systemd[1]: Started sshd@13-10.0.0.53:22-10.0.0.1:38892.service - OpenSSH per-connection server daemon (10.0.0.1:38892).
Sep 9 00:20:35.577261 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 38892 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:35.578529 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:35.582464 systemd-logind[1422]: New session 14 of user core.
Sep 9 00:20:35.592750 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 00:20:35.707039 sshd[3955]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:35.710201 systemd[1]: sshd@13-10.0.0.53:22-10.0.0.1:38892.service: Deactivated successfully.
Sep 9 00:20:35.713043 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 00:20:35.713611 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit.
Sep 9 00:20:35.714342 systemd-logind[1422]: Removed session 14.
Sep 9 00:20:40.718367 systemd[1]: Started sshd@14-10.0.0.53:22-10.0.0.1:48956.service - OpenSSH per-connection server daemon (10.0.0.1:48956).
Sep 9 00:20:40.753456 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 48956 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:40.755322 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:40.758799 systemd-logind[1422]: New session 15 of user core.
Sep 9 00:20:40.772368 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 00:20:40.886400 sshd[3970]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:40.899111 systemd[1]: sshd@14-10.0.0.53:22-10.0.0.1:48956.service: Deactivated successfully.
Sep 9 00:20:40.900532 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 00:20:40.902320 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit.
Sep 9 00:20:40.903234 systemd[1]: Started sshd@15-10.0.0.53:22-10.0.0.1:48972.service - OpenSSH per-connection server daemon (10.0.0.1:48972).
Sep 9 00:20:40.904340 systemd-logind[1422]: Removed session 15.
Sep 9 00:20:40.937408 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 48972 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:40.938656 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:40.942550 systemd-logind[1422]: New session 16 of user core.
Sep 9 00:20:40.951738 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 00:20:41.128063 sshd[3985]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:41.138123 systemd[1]: sshd@15-10.0.0.53:22-10.0.0.1:48972.service: Deactivated successfully.
Sep 9 00:20:41.140994 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 00:20:41.142322 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit.
Sep 9 00:20:41.143521 systemd[1]: Started sshd@16-10.0.0.53:22-10.0.0.1:48978.service - OpenSSH per-connection server daemon (10.0.0.1:48978).
Sep 9 00:20:41.144540 systemd-logind[1422]: Removed session 16.
Sep 9 00:20:41.180644 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 48978 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:41.182042 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:41.186084 systemd-logind[1422]: New session 17 of user core.
Sep 9 00:20:41.200715 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 00:20:42.117552 sshd[3997]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:42.126455 systemd[1]: sshd@16-10.0.0.53:22-10.0.0.1:48978.service: Deactivated successfully.
Sep 9 00:20:42.128703 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 00:20:42.130061 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit.
Sep 9 00:20:42.135961 systemd[1]: Started sshd@17-10.0.0.53:22-10.0.0.1:48982.service - OpenSSH per-connection server daemon (10.0.0.1:48982).
Sep 9 00:20:42.139378 systemd-logind[1422]: Removed session 17.
Sep 9 00:20:42.170356 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 48982 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:42.171837 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:42.175272 systemd-logind[1422]: New session 18 of user core.
Sep 9 00:20:42.182743 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 00:20:42.446054 sshd[4018]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:42.456193 systemd[1]: sshd@17-10.0.0.53:22-10.0.0.1:48982.service: Deactivated successfully.
Sep 9 00:20:42.457847 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 00:20:42.459413 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit.
Sep 9 00:20:42.473861 systemd[1]: Started sshd@18-10.0.0.53:22-10.0.0.1:48996.service - OpenSSH per-connection server daemon (10.0.0.1:48996).
Sep 9 00:20:42.475190 systemd-logind[1422]: Removed session 18.
Sep 9 00:20:42.508353 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 48996 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:42.509788 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:42.513789 systemd-logind[1422]: New session 19 of user core.
Sep 9 00:20:42.523804 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 00:20:42.649814 sshd[4031]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:42.653118 systemd[1]: sshd@18-10.0.0.53:22-10.0.0.1:48996.service: Deactivated successfully.
Sep 9 00:20:42.655188 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 00:20:42.655923 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit.
Sep 9 00:20:42.657808 systemd-logind[1422]: Removed session 19.
Sep 9 00:20:47.669659 systemd[1]: Started sshd@19-10.0.0.53:22-10.0.0.1:49012.service - OpenSSH per-connection server daemon (10.0.0.1:49012).
Sep 9 00:20:47.706273 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 49012 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:47.706686 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:47.712658 systemd-logind[1422]: New session 20 of user core.
Sep 9 00:20:47.723793 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 00:20:47.843425 sshd[4048]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:47.847433 systemd[1]: sshd@19-10.0.0.53:22-10.0.0.1:49012.service: Deactivated successfully.
Sep 9 00:20:47.849140 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 00:20:47.850393 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit.
Sep 9 00:20:47.851949 systemd-logind[1422]: Removed session 20.
Sep 9 00:20:52.854161 systemd[1]: Started sshd@20-10.0.0.53:22-10.0.0.1:57704.service - OpenSSH per-connection server daemon (10.0.0.1:57704).
Sep 9 00:20:52.923674 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 57704 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:52.926028 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:52.932362 systemd-logind[1422]: New session 21 of user core.
Sep 9 00:20:52.940817 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 00:20:53.073810 sshd[4064]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:53.077659 systemd[1]: sshd@20-10.0.0.53:22-10.0.0.1:57704.service: Deactivated successfully.
Sep 9 00:20:53.079577 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 00:20:53.080290 systemd-logind[1422]: Session 21 logged out. Waiting for processes to exit.
Sep 9 00:20:53.081247 systemd-logind[1422]: Removed session 21.
Sep 9 00:20:58.084253 systemd[1]: Started sshd@21-10.0.0.53:22-10.0.0.1:57716.service - OpenSSH per-connection server daemon (10.0.0.1:57716).
Sep 9 00:20:58.119977 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 57716 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:20:58.121407 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:20:58.125881 systemd-logind[1422]: New session 22 of user core.
Sep 9 00:20:58.135770 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 00:20:58.244376 sshd[4080]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:58.247769 systemd[1]: sshd@21-10.0.0.53:22-10.0.0.1:57716.service: Deactivated successfully.
Sep 9 00:20:58.250200 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 00:20:58.250916 systemd-logind[1422]: Session 22 logged out. Waiting for processes to exit.
Sep 9 00:20:58.251695 systemd-logind[1422]: Removed session 22.
Sep 9 00:21:03.260246 systemd[1]: Started sshd@22-10.0.0.53:22-10.0.0.1:39686.service - OpenSSH per-connection server daemon (10.0.0.1:39686).
Sep 9 00:21:03.292764 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 39686 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:21:03.294273 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:03.299677 systemd-logind[1422]: New session 23 of user core.
Sep 9 00:21:03.310048 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 00:21:03.439446 sshd[4094]: pam_unix(sshd:session): session closed for user core
Sep 9 00:21:03.451080 systemd[1]: sshd@22-10.0.0.53:22-10.0.0.1:39686.service: Deactivated successfully.
Sep 9 00:21:03.452662 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 00:21:03.454017 systemd-logind[1422]: Session 23 logged out. Waiting for processes to exit.
Sep 9 00:21:03.455868 systemd[1]: Started sshd@23-10.0.0.53:22-10.0.0.1:39700.service - OpenSSH per-connection server daemon (10.0.0.1:39700).
Sep 9 00:21:03.458186 systemd-logind[1422]: Removed session 23.
Sep 9 00:21:03.488930 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 39700 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:21:03.490232 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:03.493859 systemd-logind[1422]: New session 24 of user core.
Sep 9 00:21:03.501753 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 00:21:06.091866 systemd[1]: run-containerd-runc-k8s.io-868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c-runc.YPUmZl.mount: Deactivated successfully.
Sep 9 00:21:06.092263 containerd[1444]: time="2025-09-09T00:21:06.092233731Z" level=info msg="StopContainer for \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\" with timeout 30 (s)"
Sep 9 00:21:06.092887 containerd[1444]: time="2025-09-09T00:21:06.092828780Z" level=info msg="Stop container \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\" with signal terminated"
Sep 9 00:21:06.112694 systemd[1]: cri-containerd-d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705.scope: Deactivated successfully.
Sep 9 00:21:06.115935 containerd[1444]: time="2025-09-09T00:21:06.115719461Z" level=info msg="StopContainer for \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\" with timeout 2 (s)"
Sep 9 00:21:06.116826 containerd[1444]: time="2025-09-09T00:21:06.116735795Z" level=info msg="Stop container \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\" with signal terminated"
Sep 9 00:21:06.125142 systemd-networkd[1373]: lxc_health: Link DOWN
Sep 9 00:21:06.125148 systemd-networkd[1373]: lxc_health: Lost carrier
Sep 9 00:21:06.142022 containerd[1444]: time="2025-09-09T00:21:06.141951989Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:21:06.148719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705-rootfs.mount: Deactivated successfully.
Sep 9 00:21:06.150734 systemd[1]: cri-containerd-868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c.scope: Deactivated successfully.
Sep 9 00:21:06.151001 systemd[1]: cri-containerd-868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c.scope: Consumed 6.238s CPU time.
Sep 9 00:21:06.159577 containerd[1444]: time="2025-09-09T00:21:06.159512235Z" level=info msg="shim disconnected" id=d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705 namespace=k8s.io
Sep 9 00:21:06.159577 containerd[1444]: time="2025-09-09T00:21:06.159590396Z" level=warning msg="cleaning up after shim disconnected" id=d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705 namespace=k8s.io
Sep 9 00:21:06.159577 containerd[1444]: time="2025-09-09T00:21:06.159600596Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:21:06.172420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c-rootfs.mount: Deactivated successfully.
Sep 9 00:21:06.179482 containerd[1444]: time="2025-09-09T00:21:06.179422314Z" level=info msg="shim disconnected" id=868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c namespace=k8s.io
Sep 9 00:21:06.179482 containerd[1444]: time="2025-09-09T00:21:06.179478915Z" level=warning msg="cleaning up after shim disconnected" id=868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c namespace=k8s.io
Sep 9 00:21:06.179482 containerd[1444]: time="2025-09-09T00:21:06.179487515Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:21:06.221824 containerd[1444]: time="2025-09-09T00:21:06.221780628Z" level=info msg="StopContainer for \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\" returns successfully"
Sep 9 00:21:06.222580 containerd[1444]: time="2025-09-09T00:21:06.222482838Z" level=info msg="StopPodSandbox for \"c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e\""
Sep 9 00:21:06.222580 containerd[1444]: time="2025-09-09T00:21:06.222534438Z" level=info msg="Container to stop \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:21:06.224249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e-shm.mount: Deactivated successfully.
Sep 9 00:21:06.228743 systemd[1]: cri-containerd-c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e.scope: Deactivated successfully.
Sep 9 00:21:06.239654 containerd[1444]: time="2025-09-09T00:21:06.239612878Z" level=info msg="StopContainer for \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\" returns successfully"
Sep 9 00:21:06.240495 containerd[1444]: time="2025-09-09T00:21:06.240473850Z" level=info msg="StopPodSandbox for \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\""
Sep 9 00:21:06.240672 containerd[1444]: time="2025-09-09T00:21:06.240648292Z" level=info msg="Container to stop \"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:21:06.240747 containerd[1444]: time="2025-09-09T00:21:06.240732134Z" level=info msg="Container to stop \"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:21:06.240816 containerd[1444]: time="2025-09-09T00:21:06.240792334Z" level=info msg="Container to stop \"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:21:06.240870 containerd[1444]: time="2025-09-09T00:21:06.240856855Z" level=info msg="Container to stop \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:21:06.240937 containerd[1444]: time="2025-09-09T00:21:06.240918776Z" level=info msg="Container to stop \"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:21:06.246233 systemd[1]: cri-containerd-4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52.scope: Deactivated successfully.
Sep 9 00:21:06.266065 containerd[1444]: time="2025-09-09T00:21:06.265842646Z" level=info msg="shim disconnected" id=c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e namespace=k8s.io
Sep 9 00:21:06.266065 containerd[1444]: time="2025-09-09T00:21:06.265907807Z" level=warning msg="cleaning up after shim disconnected" id=c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e namespace=k8s.io
Sep 9 00:21:06.266065 containerd[1444]: time="2025-09-09T00:21:06.265917007Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:21:06.278080 containerd[1444]: time="2025-09-09T00:21:06.277921015Z" level=info msg="TearDown network for sandbox \"c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e\" successfully"
Sep 9 00:21:06.278080 containerd[1444]: time="2025-09-09T00:21:06.277955976Z" level=info msg="StopPodSandbox for \"c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e\" returns successfully"
Sep 9 00:21:06.283675 containerd[1444]: time="2025-09-09T00:21:06.283234890Z" level=info msg="shim disconnected" id=4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52 namespace=k8s.io
Sep 9 00:21:06.283675 containerd[1444]: time="2025-09-09T00:21:06.283657816Z" level=warning msg="cleaning up after shim disconnected" id=4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52 namespace=k8s.io
Sep 9 00:21:06.283881 containerd[1444]: time="2025-09-09T00:21:06.283672376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:21:06.297657 containerd[1444]: time="2025-09-09T00:21:06.297611971Z" level=info msg="TearDown network for sandbox \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" successfully"
Sep 9 00:21:06.297657 containerd[1444]: time="2025-09-09T00:21:06.297649892Z" level=info msg="StopPodSandbox for \"4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52\" returns successfully"
Sep 9 00:21:06.483510 kubelet[2449]: I0909 00:21:06.483396 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc245\" (UniqueName: \"kubernetes.io/projected/102d3534-5684-4468-9998-dfb590525263-kube-api-access-bc245\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.483510 kubelet[2449]: I0909 00:21:06.483446 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/102d3534-5684-4468-9998-dfb590525263-cilium-config-path\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.483510 kubelet[2449]: I0909 00:21:06.483466 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-bpf-maps\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.484272 kubelet[2449]: I0909 00:21:06.483969 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-etc-cni-netd\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.484272 kubelet[2449]: I0909 00:21:06.484002 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ad3069b-e0f2-4278-95fc-cf229ad12f49-cilium-config-path\") pod \"9ad3069b-e0f2-4278-95fc-cf229ad12f49\" (UID: \"9ad3069b-e0f2-4278-95fc-cf229ad12f49\") "
Sep 9 00:21:06.484272 kubelet[2449]: I0909 00:21:06.484033 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-xtables-lock\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.484272 kubelet[2449]: I0909 00:21:06.484051 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-host-proc-sys-net\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.484272 kubelet[2449]: I0909 00:21:06.484066 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-hostproc\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.484272 kubelet[2449]: I0909 00:21:06.484081 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cilium-cgroup\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.484873 kubelet[2449]: I0909 00:21:06.484098 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrxr\" (UniqueName: \"kubernetes.io/projected/9ad3069b-e0f2-4278-95fc-cf229ad12f49-kube-api-access-qgrxr\") pod \"9ad3069b-e0f2-4278-95fc-cf229ad12f49\" (UID: \"9ad3069b-e0f2-4278-95fc-cf229ad12f49\") "
Sep 9 00:21:06.484873 kubelet[2449]: I0909 00:21:06.484116 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-lib-modules\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.484873 kubelet[2449]: I0909 00:21:06.484131 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-host-proc-sys-kernel\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.484873 kubelet[2449]: I0909 00:21:06.484150 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/102d3534-5684-4468-9998-dfb590525263-clustermesh-secrets\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.484873 kubelet[2449]: I0909 00:21:06.484165 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/102d3534-5684-4468-9998-dfb590525263-hubble-tls\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.484873 kubelet[2449]: I0909 00:21:06.484202 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cilium-run\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.485186 kubelet[2449]: I0909 00:21:06.484218 2449 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cni-path\") pod \"102d3534-5684-4468-9998-dfb590525263\" (UID: \"102d3534-5684-4468-9998-dfb590525263\") "
Sep 9 00:21:06.485186 kubelet[2449]: I0909 00:21:06.484456 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cni-path" (OuterVolumeSpecName: "cni-path") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:21:06.485186 kubelet[2449]: I0909 00:21:06.484457 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:21:06.485186 kubelet[2449]: I0909 00:21:06.484496 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:21:06.485186 kubelet[2449]: I0909 00:21:06.484511 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:21:06.485312 kubelet[2449]: I0909 00:21:06.484525 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:06.486614 kubelet[2449]: I0909 00:21:06.486577 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ad3069b-e0f2-4278-95fc-cf229ad12f49-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9ad3069b-e0f2-4278-95fc-cf229ad12f49" (UID: "9ad3069b-e0f2-4278-95fc-cf229ad12f49"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:21:06.486699 kubelet[2449]: I0909 00:21:06.486637 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:06.486699 kubelet[2449]: I0909 00:21:06.486655 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:06.486699 kubelet[2449]: I0909 00:21:06.486681 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-hostproc" (OuterVolumeSpecName: "hostproc") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:06.486699 kubelet[2449]: I0909 00:21:06.486696 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:06.487949 kubelet[2449]: I0909 00:21:06.487239 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:21:06.487949 kubelet[2449]: I0909 00:21:06.487324 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102d3534-5684-4468-9998-dfb590525263-kube-api-access-bc245" (OuterVolumeSpecName: "kube-api-access-bc245") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "kube-api-access-bc245". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:21:06.488681 kubelet[2449]: I0909 00:21:06.488651 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102d3534-5684-4468-9998-dfb590525263-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:21:06.488737 kubelet[2449]: I0909 00:21:06.488701 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad3069b-e0f2-4278-95fc-cf229ad12f49-kube-api-access-qgrxr" (OuterVolumeSpecName: "kube-api-access-qgrxr") pod "9ad3069b-e0f2-4278-95fc-cf229ad12f49" (UID: "9ad3069b-e0f2-4278-95fc-cf229ad12f49"). InnerVolumeSpecName "kube-api-access-qgrxr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:21:06.489194 kubelet[2449]: I0909 00:21:06.489170 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102d3534-5684-4468-9998-dfb590525263-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:21:06.489453 kubelet[2449]: I0909 00:21:06.489425 2449 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102d3534-5684-4468-9998-dfb590525263-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "102d3534-5684-4468-9998-dfb590525263" (UID: "102d3534-5684-4468-9998-dfb590525263"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:21:06.584933 kubelet[2449]: I0909 00:21:06.584884 2449 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.584933 kubelet[2449]: I0909 00:21:06.584923 2449 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bc245\" (UniqueName: \"kubernetes.io/projected/102d3534-5684-4468-9998-dfb590525263-kube-api-access-bc245\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.584933 kubelet[2449]: I0909 00:21:06.584938 2449 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/102d3534-5684-4468-9998-dfb590525263-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.584933 kubelet[2449]: I0909 00:21:06.584947 2449 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585153 kubelet[2449]: I0909 00:21:06.584955 2449 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585153 kubelet[2449]: I0909 00:21:06.584965 2449 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585153 kubelet[2449]: I0909 00:21:06.584973 2449 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ad3069b-e0f2-4278-95fc-cf229ad12f49-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585153 kubelet[2449]: I0909 
00:21:06.584981 2449 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585153 kubelet[2449]: I0909 00:21:06.584988 2449 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585153 kubelet[2449]: I0909 00:21:06.584995 2449 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585153 kubelet[2449]: I0909 00:21:06.585002 2449 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrxr\" (UniqueName: \"kubernetes.io/projected/9ad3069b-e0f2-4278-95fc-cf229ad12f49-kube-api-access-qgrxr\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585153 kubelet[2449]: I0909 00:21:06.585010 2449 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585324 kubelet[2449]: I0909 00:21:06.585028 2449 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585324 kubelet[2449]: I0909 00:21:06.585035 2449 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/102d3534-5684-4468-9998-dfb590525263-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585324 kubelet[2449]: I0909 00:21:06.585044 2449 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/102d3534-5684-4468-9998-dfb590525263-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.585324 kubelet[2449]: I0909 00:21:06.585052 2449 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/102d3534-5684-4468-9998-dfb590525263-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:21:06.774658 systemd[1]: Removed slice kubepods-besteffort-pod9ad3069b_e0f2_4278_95fc_cf229ad12f49.slice - libcontainer container kubepods-besteffort-pod9ad3069b_e0f2_4278_95fc_cf229ad12f49.slice. Sep 9 00:21:06.776999 systemd[1]: Removed slice kubepods-burstable-pod102d3534_5684_4468_9998_dfb590525263.slice - libcontainer container kubepods-burstable-pod102d3534_5684_4468_9998_dfb590525263.slice. Sep 9 00:21:06.777092 systemd[1]: kubepods-burstable-pod102d3534_5684_4468_9998_dfb590525263.slice: Consumed 6.311s CPU time. Sep 9 00:21:06.805811 kubelet[2449]: E0909 00:21:06.805771 2449 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:21:06.954079 kubelet[2449]: I0909 00:21:06.953834 2449 scope.go:117] "RemoveContainer" containerID="d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705" Sep 9 00:21:06.957147 containerd[1444]: time="2025-09-09T00:21:06.957001897Z" level=info msg="RemoveContainer for \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\"" Sep 9 00:21:06.968387 containerd[1444]: time="2025-09-09T00:21:06.968329336Z" level=info msg="RemoveContainer for \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\" returns successfully" Sep 9 00:21:06.968747 kubelet[2449]: I0909 00:21:06.968575 2449 scope.go:117] "RemoveContainer" containerID="d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705" Sep 9 00:21:06.968893 containerd[1444]: time="2025-09-09T00:21:06.968838743Z" 
level=error msg="ContainerStatus for \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\": not found" Sep 9 00:21:06.980085 kubelet[2449]: E0909 00:21:06.980050 2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\": not found" containerID="d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705" Sep 9 00:21:06.988452 kubelet[2449]: I0909 00:21:06.988201 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705"} err="failed to get container status \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\": rpc error: code = NotFound desc = an error occurred when try to find container \"d58e19b2fdcf95bf616c13fb69e68044f928d1c958e58bce80eb874a104a6705\": not found" Sep 9 00:21:06.988452 kubelet[2449]: I0909 00:21:06.988403 2449 scope.go:117] "RemoveContainer" containerID="868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c" Sep 9 00:21:06.994293 containerd[1444]: time="2025-09-09T00:21:06.993415447Z" level=info msg="RemoveContainer for \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\"" Sep 9 00:21:07.001153 containerd[1444]: time="2025-09-09T00:21:07.001077195Z" level=info msg="RemoveContainer for \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\" returns successfully" Sep 9 00:21:07.001357 kubelet[2449]: I0909 00:21:07.001324 2449 scope.go:117] "RemoveContainer" containerID="090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71" Sep 9 00:21:07.003287 containerd[1444]: time="2025-09-09T00:21:07.003130063Z" level=info msg="RemoveContainer for 
\"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71\"" Sep 9 00:21:07.019488 containerd[1444]: time="2025-09-09T00:21:07.019444927Z" level=info msg="RemoveContainer for \"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71\" returns successfully" Sep 9 00:21:07.019901 kubelet[2449]: I0909 00:21:07.019872 2449 scope.go:117] "RemoveContainer" containerID="9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c" Sep 9 00:21:07.021003 containerd[1444]: time="2025-09-09T00:21:07.020931067Z" level=info msg="RemoveContainer for \"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c\"" Sep 9 00:21:07.023390 containerd[1444]: time="2025-09-09T00:21:07.023349500Z" level=info msg="RemoveContainer for \"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c\" returns successfully" Sep 9 00:21:07.023736 kubelet[2449]: I0909 00:21:07.023710 2449 scope.go:117] "RemoveContainer" containerID="81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce" Sep 9 00:21:07.025724 containerd[1444]: time="2025-09-09T00:21:07.024903242Z" level=info msg="RemoveContainer for \"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce\"" Sep 9 00:21:07.029764 containerd[1444]: time="2025-09-09T00:21:07.029734428Z" level=info msg="RemoveContainer for \"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce\" returns successfully" Sep 9 00:21:07.030423 kubelet[2449]: I0909 00:21:07.030273 2449 scope.go:117] "RemoveContainer" containerID="a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b" Sep 9 00:21:07.032684 containerd[1444]: time="2025-09-09T00:21:07.032649548Z" level=info msg="RemoveContainer for \"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b\"" Sep 9 00:21:07.037828 containerd[1444]: time="2025-09-09T00:21:07.037780778Z" level=info msg="RemoveContainer for \"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b\" returns successfully" Sep 9 00:21:07.038041 
kubelet[2449]: I0909 00:21:07.038010 2449 scope.go:117] "RemoveContainer" containerID="868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c" Sep 9 00:21:07.038325 containerd[1444]: time="2025-09-09T00:21:07.038251545Z" level=error msg="ContainerStatus for \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\": not found" Sep 9 00:21:07.038389 kubelet[2449]: E0909 00:21:07.038363 2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\": not found" containerID="868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c" Sep 9 00:21:07.038426 kubelet[2449]: I0909 00:21:07.038392 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c"} err="failed to get container status \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"868bf30759af2df05400d5daf9bcf03ae2400eaab114d304f11fde4eb79f2f0c\": not found" Sep 9 00:21:07.038426 kubelet[2449]: I0909 00:21:07.038415 2449 scope.go:117] "RemoveContainer" containerID="090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71" Sep 9 00:21:07.038674 containerd[1444]: time="2025-09-09T00:21:07.038638630Z" level=error msg="ContainerStatus for \"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71\": not found" Sep 9 00:21:07.038770 kubelet[2449]: E0909 00:21:07.038749 2449 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71\": not found" containerID="090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71" Sep 9 00:21:07.038815 kubelet[2449]: I0909 00:21:07.038798 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71"} err="failed to get container status \"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71\": rpc error: code = NotFound desc = an error occurred when try to find container \"090629808dd10af0ab367f36111bf7ea057975920cd96d94ff7c5a3c476bea71\": not found" Sep 9 00:21:07.038843 kubelet[2449]: I0909 00:21:07.038816 2449 scope.go:117] "RemoveContainer" containerID="9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c" Sep 9 00:21:07.039065 containerd[1444]: time="2025-09-09T00:21:07.039034556Z" level=error msg="ContainerStatus for \"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c\": not found" Sep 9 00:21:07.039165 kubelet[2449]: E0909 00:21:07.039142 2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c\": not found" containerID="9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c" Sep 9 00:21:07.039197 kubelet[2449]: I0909 00:21:07.039167 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c"} err="failed to get container status 
\"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ffc67c52f9487fcd3d7bb61641f64e63aedbf96d38d338eb6bd71798b9dc40c\": not found" Sep 9 00:21:07.039197 kubelet[2449]: I0909 00:21:07.039181 2449 scope.go:117] "RemoveContainer" containerID="81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce" Sep 9 00:21:07.039331 containerd[1444]: time="2025-09-09T00:21:07.039305479Z" level=error msg="ContainerStatus for \"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce\": not found" Sep 9 00:21:07.039493 kubelet[2449]: E0909 00:21:07.039427 2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce\": not found" containerID="81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce" Sep 9 00:21:07.039493 kubelet[2449]: I0909 00:21:07.039459 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce"} err="failed to get container status \"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"81e43b2a9bf7a70f7a71df8ad6f358d17c3c88361f73a8a5d02d72cb95e488ce\": not found" Sep 9 00:21:07.039493 kubelet[2449]: I0909 00:21:07.039479 2449 scope.go:117] "RemoveContainer" containerID="a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b" Sep 9 00:21:07.039756 containerd[1444]: time="2025-09-09T00:21:07.039691565Z" level=error msg="ContainerStatus for \"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b\": not found" Sep 9 00:21:07.039813 kubelet[2449]: E0909 00:21:07.039795 2449 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b\": not found" containerID="a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b" Sep 9 00:21:07.039848 kubelet[2449]: I0909 00:21:07.039813 2449 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b"} err="failed to get container status \"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0a833cc69a1cde7c5f048b53d471e0d2c97ee9327b14c6d6d5d19d07323ea9b\": not found" Sep 9 00:21:07.089049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c394b3067b514f8c48bca38f3fda7400d7eba2f000df6b856b45385ff9432e2e-rootfs.mount: Deactivated successfully. Sep 9 00:21:07.089149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52-rootfs.mount: Deactivated successfully. Sep 9 00:21:07.089199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c521500aac2f488688ff3b2392aab2f93b4a718b655f7b184e8e0ed07804a52-shm.mount: Deactivated successfully. Sep 9 00:21:07.089254 systemd[1]: var-lib-kubelet-pods-9ad3069b\x2de0f2\x2d4278\x2d95fc\x2dcf229ad12f49-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqgrxr.mount: Deactivated successfully. 
Sep 9 00:21:07.089308 systemd[1]: var-lib-kubelet-pods-102d3534\x2d5684\x2d4468\x2d9998\x2ddfb590525263-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbc245.mount: Deactivated successfully. Sep 9 00:21:07.089355 systemd[1]: var-lib-kubelet-pods-102d3534\x2d5684\x2d4468\x2d9998\x2ddfb590525263-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:21:07.089401 systemd[1]: var-lib-kubelet-pods-102d3534\x2d5684\x2d4468\x2d9998\x2ddfb590525263-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:21:07.932486 sshd[4108]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:07.944874 systemd[1]: sshd@23-10.0.0.53:22-10.0.0.1:39700.service: Deactivated successfully. Sep 9 00:21:07.947909 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:21:07.948268 systemd[1]: session-24.scope: Consumed 1.794s CPU time. Sep 9 00:21:07.950050 systemd-logind[1422]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:21:07.960954 systemd[1]: Started sshd@24-10.0.0.53:22-10.0.0.1:39702.service - OpenSSH per-connection server daemon (10.0.0.1:39702). Sep 9 00:21:07.962759 systemd-logind[1422]: Removed session 24. Sep 9 00:21:08.006366 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 39702 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:21:08.007978 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:08.014416 systemd-logind[1422]: New session 25 of user core. Sep 9 00:21:08.025995 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 9 00:21:08.754098 kubelet[2449]: I0909 00:21:08.754058 2449 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102d3534-5684-4468-9998-dfb590525263" path="/var/lib/kubelet/pods/102d3534-5684-4468-9998-dfb590525263/volumes" Sep 9 00:21:08.756076 kubelet[2449]: I0909 00:21:08.754637 2449 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad3069b-e0f2-4278-95fc-cf229ad12f49" path="/var/lib/kubelet/pods/9ad3069b-e0f2-4278-95fc-cf229ad12f49/volumes" Sep 9 00:21:08.830159 kubelet[2449]: I0909 00:21:08.829906 2449 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T00:21:08Z","lastTransitionTime":"2025-09-09T00:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 00:21:09.335781 sshd[4269]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:09.347348 systemd[1]: sshd@24-10.0.0.53:22-10.0.0.1:39702.service: Deactivated successfully. Sep 9 00:21:09.349675 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:21:09.351679 systemd[1]: session-25.scope: Consumed 1.148s CPU time. Sep 9 00:21:09.353100 systemd-logind[1422]: Session 25 logged out. Waiting for processes to exit. Sep 9 00:21:09.359895 systemd[1]: Started sshd@25-10.0.0.53:22-10.0.0.1:39714.service - OpenSSH per-connection server daemon (10.0.0.1:39714). Sep 9 00:21:09.365253 systemd-logind[1422]: Removed session 25. 
Sep 9 00:21:09.369211 kubelet[2449]: I0909 00:21:09.369159 2449 memory_manager.go:355] "RemoveStaleState removing state" podUID="9ad3069b-e0f2-4278-95fc-cf229ad12f49" containerName="cilium-operator" Sep 9 00:21:09.369211 kubelet[2449]: I0909 00:21:09.369191 2449 memory_manager.go:355] "RemoveStaleState removing state" podUID="102d3534-5684-4468-9998-dfb590525263" containerName="cilium-agent" Sep 9 00:21:09.413517 systemd[1]: Created slice kubepods-burstable-pod9b1d5e59_e88a_445f_8562_2af1b96999a7.slice - libcontainer container kubepods-burstable-pod9b1d5e59_e88a_445f_8562_2af1b96999a7.slice. Sep 9 00:21:09.424372 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 39714 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:21:09.427057 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:09.436554 systemd-logind[1422]: New session 26 of user core. Sep 9 00:21:09.440741 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 00:21:09.493863 sshd[4282]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:09.503664 systemd[1]: sshd@25-10.0.0.53:22-10.0.0.1:39714.service: Deactivated successfully. 
Sep 9 00:21:09.504807 kubelet[2449]: I0909 00:21:09.503759 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b1d5e59-e88a-445f-8562-2af1b96999a7-cilium-run\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.504807 kubelet[2449]: I0909 00:21:09.503792 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b1d5e59-e88a-445f-8562-2af1b96999a7-hostproc\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.504807 kubelet[2449]: I0909 00:21:09.503807 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b1d5e59-e88a-445f-8562-2af1b96999a7-lib-modules\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.504807 kubelet[2449]: I0909 00:21:09.503824 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b1d5e59-e88a-445f-8562-2af1b96999a7-host-proc-sys-net\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.504807 kubelet[2449]: I0909 00:21:09.503839 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b1d5e59-e88a-445f-8562-2af1b96999a7-etc-cni-netd\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.504807 kubelet[2449]: I0909 00:21:09.503855 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvfmt\" (UniqueName: \"kubernetes.io/projected/9b1d5e59-e88a-445f-8562-2af1b96999a7-kube-api-access-jvfmt\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.504994 kubelet[2449]: I0909 00:21:09.503873 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b1d5e59-e88a-445f-8562-2af1b96999a7-bpf-maps\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.504994 kubelet[2449]: I0909 00:21:09.503887 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b1d5e59-e88a-445f-8562-2af1b96999a7-cilium-cgroup\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.504994 kubelet[2449]: I0909 00:21:09.503901 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b1d5e59-e88a-445f-8562-2af1b96999a7-xtables-lock\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.504994 kubelet[2449]: I0909 00:21:09.503916 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b1d5e59-e88a-445f-8562-2af1b96999a7-cni-path\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.504994 kubelet[2449]: I0909 00:21:09.503931 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9b1d5e59-e88a-445f-8562-2af1b96999a7-cilium-ipsec-secrets\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.504994 kubelet[2449]: I0909 00:21:09.503947 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b1d5e59-e88a-445f-8562-2af1b96999a7-host-proc-sys-kernel\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.505126 kubelet[2449]: I0909 00:21:09.503964 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b1d5e59-e88a-445f-8562-2af1b96999a7-clustermesh-secrets\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.505126 kubelet[2449]: I0909 00:21:09.503978 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b1d5e59-e88a-445f-8562-2af1b96999a7-cilium-config-path\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.505126 kubelet[2449]: I0909 00:21:09.503997 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b1d5e59-e88a-445f-8562-2af1b96999a7-hubble-tls\") pod \"cilium-swcv6\" (UID: \"9b1d5e59-e88a-445f-8562-2af1b96999a7\") " pod="kube-system/cilium-swcv6"
Sep 9 00:21:09.505332 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 00:21:09.506706 systemd-logind[1422]: Session 26 logged out. Waiting for processes to exit.
Sep 9 00:21:09.516356 systemd[1]: Started sshd@26-10.0.0.53:22-10.0.0.1:39720.service - OpenSSH per-connection server daemon (10.0.0.1:39720).
Sep 9 00:21:09.517174 systemd-logind[1422]: Removed session 26.
Sep 9 00:21:09.546222 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 39720 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k
Sep 9 00:21:09.546753 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:09.550197 systemd-logind[1422]: New session 27 of user core.
Sep 9 00:21:09.559746 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 00:21:09.717348 kubelet[2449]: E0909 00:21:09.717067 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:09.717641 containerd[1444]: time="2025-09-09T00:21:09.717553536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swcv6,Uid:9b1d5e59-e88a-445f-8562-2af1b96999a7,Namespace:kube-system,Attempt:0,}"
Sep 9 00:21:09.738113 containerd[1444]: time="2025-09-09T00:21:09.737968924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:21:09.738113 containerd[1444]: time="2025-09-09T00:21:09.738026525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:21:09.738113 containerd[1444]: time="2025-09-09T00:21:09.738050726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:21:09.738297 containerd[1444]: time="2025-09-09T00:21:09.738121406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:21:09.755822 systemd[1]: Started cri-containerd-cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8.scope - libcontainer container cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8.
Sep 9 00:21:09.777548 containerd[1444]: time="2025-09-09T00:21:09.777509244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swcv6,Uid:9b1d5e59-e88a-445f-8562-2af1b96999a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8\""
Sep 9 00:21:09.778536 kubelet[2449]: E0909 00:21:09.778514 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:09.781853 containerd[1444]: time="2025-09-09T00:21:09.781817780Z" level=info msg="CreateContainer within sandbox \"cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 00:21:09.794416 containerd[1444]: time="2025-09-09T00:21:09.794372945Z" level=info msg="CreateContainer within sandbox \"cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b6a6d34aee343a84a5c344a7c19ca836f2ec500dd61b4ec4f038483670eab87\""
Sep 9 00:21:09.795683 containerd[1444]: time="2025-09-09T00:21:09.795057234Z" level=info msg="StartContainer for \"7b6a6d34aee343a84a5c344a7c19ca836f2ec500dd61b4ec4f038483670eab87\""
Sep 9 00:21:09.817790 systemd[1]: Started cri-containerd-7b6a6d34aee343a84a5c344a7c19ca836f2ec500dd61b4ec4f038483670eab87.scope - libcontainer container 7b6a6d34aee343a84a5c344a7c19ca836f2ec500dd61b4ec4f038483670eab87.
Sep 9 00:21:09.842942 containerd[1444]: time="2025-09-09T00:21:09.842829261Z" level=info msg="StartContainer for \"7b6a6d34aee343a84a5c344a7c19ca836f2ec500dd61b4ec4f038483670eab87\" returns successfully"
Sep 9 00:21:09.853458 systemd[1]: cri-containerd-7b6a6d34aee343a84a5c344a7c19ca836f2ec500dd61b4ec4f038483670eab87.scope: Deactivated successfully.
Sep 9 00:21:09.883751 containerd[1444]: time="2025-09-09T00:21:09.883693278Z" level=info msg="shim disconnected" id=7b6a6d34aee343a84a5c344a7c19ca836f2ec500dd61b4ec4f038483670eab87 namespace=k8s.io
Sep 9 00:21:09.883751 containerd[1444]: time="2025-09-09T00:21:09.883746279Z" level=warning msg="cleaning up after shim disconnected" id=7b6a6d34aee343a84a5c344a7c19ca836f2ec500dd61b4ec4f038483670eab87 namespace=k8s.io
Sep 9 00:21:09.883751 containerd[1444]: time="2025-09-09T00:21:09.883754999Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:21:09.976695 kubelet[2449]: E0909 00:21:09.976527 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:09.980227 containerd[1444]: time="2025-09-09T00:21:09.978534083Z" level=info msg="CreateContainer within sandbox \"cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 00:21:09.999029 containerd[1444]: time="2025-09-09T00:21:09.998967192Z" level=info msg="CreateContainer within sandbox \"cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d4374384fe73e23eb64f90ee44b423a1581be022dbfe9adc798cd252c621a39d\""
Sep 9 00:21:09.999460 containerd[1444]: time="2025-09-09T00:21:09.999433678Z" level=info msg="StartContainer for \"d4374384fe73e23eb64f90ee44b423a1581be022dbfe9adc798cd252c621a39d\""
Sep 9 00:21:10.023742 systemd[1]: Started cri-containerd-d4374384fe73e23eb64f90ee44b423a1581be022dbfe9adc798cd252c621a39d.scope - libcontainer container d4374384fe73e23eb64f90ee44b423a1581be022dbfe9adc798cd252c621a39d.
Sep 9 00:21:10.048008 containerd[1444]: time="2025-09-09T00:21:10.047559257Z" level=info msg="StartContainer for \"d4374384fe73e23eb64f90ee44b423a1581be022dbfe9adc798cd252c621a39d\" returns successfully"
Sep 9 00:21:10.052905 systemd[1]: cri-containerd-d4374384fe73e23eb64f90ee44b423a1581be022dbfe9adc798cd252c621a39d.scope: Deactivated successfully.
Sep 9 00:21:10.079645 containerd[1444]: time="2025-09-09T00:21:10.079578468Z" level=info msg="shim disconnected" id=d4374384fe73e23eb64f90ee44b423a1581be022dbfe9adc798cd252c621a39d namespace=k8s.io
Sep 9 00:21:10.079645 containerd[1444]: time="2025-09-09T00:21:10.079641309Z" level=warning msg="cleaning up after shim disconnected" id=d4374384fe73e23eb64f90ee44b423a1581be022dbfe9adc798cd252c621a39d namespace=k8s.io
Sep 9 00:21:10.079645 containerd[1444]: time="2025-09-09T00:21:10.079650669Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:21:10.753055 kubelet[2449]: E0909 00:21:10.753022 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:10.981712 kubelet[2449]: E0909 00:21:10.980377 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:10.984852 containerd[1444]: time="2025-09-09T00:21:10.984026012Z" level=info msg="CreateContainer within sandbox \"cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 00:21:11.016430 containerd[1444]: time="2025-09-09T00:21:11.016187822Z" level=info msg="CreateContainer within sandbox \"cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3e4984f0af69c22d382b5645bbf6e14f701091172a9c46fe5f2f11901744ee7a\""
Sep 9 00:21:11.020187 containerd[1444]: time="2025-09-09T00:21:11.017766842Z" level=info msg="StartContainer for \"3e4984f0af69c22d382b5645bbf6e14f701091172a9c46fe5f2f11901744ee7a\""
Sep 9 00:21:11.054530 systemd[1]: run-containerd-runc-k8s.io-3e4984f0af69c22d382b5645bbf6e14f701091172a9c46fe5f2f11901744ee7a-runc.7KGksd.mount: Deactivated successfully.
Sep 9 00:21:11.066792 systemd[1]: Started cri-containerd-3e4984f0af69c22d382b5645bbf6e14f701091172a9c46fe5f2f11901744ee7a.scope - libcontainer container 3e4984f0af69c22d382b5645bbf6e14f701091172a9c46fe5f2f11901744ee7a.
Sep 9 00:21:11.100948 systemd[1]: cri-containerd-3e4984f0af69c22d382b5645bbf6e14f701091172a9c46fe5f2f11901744ee7a.scope: Deactivated successfully.
Sep 9 00:21:11.102528 containerd[1444]: time="2025-09-09T00:21:11.102066622Z" level=info msg="StartContainer for \"3e4984f0af69c22d382b5645bbf6e14f701091172a9c46fe5f2f11901744ee7a\" returns successfully"
Sep 9 00:21:11.133302 containerd[1444]: time="2025-09-09T00:21:11.133217494Z" level=info msg="shim disconnected" id=3e4984f0af69c22d382b5645bbf6e14f701091172a9c46fe5f2f11901744ee7a namespace=k8s.io
Sep 9 00:21:11.133302 containerd[1444]: time="2025-09-09T00:21:11.133272215Z" level=warning msg="cleaning up after shim disconnected" id=3e4984f0af69c22d382b5645bbf6e14f701091172a9c46fe5f2f11901744ee7a namespace=k8s.io
Sep 9 00:21:11.133302 containerd[1444]: time="2025-09-09T00:21:11.133280655Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:21:11.611137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e4984f0af69c22d382b5645bbf6e14f701091172a9c46fe5f2f11901744ee7a-rootfs.mount: Deactivated successfully.
Sep 9 00:21:11.807627 kubelet[2449]: E0909 00:21:11.807556 2449 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 00:21:11.986232 kubelet[2449]: E0909 00:21:11.985682 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:11.988584 containerd[1444]: time="2025-09-09T00:21:11.987709486Z" level=info msg="CreateContainer within sandbox \"cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 00:21:12.003517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3425723000.mount: Deactivated successfully.
Sep 9 00:21:12.012363 containerd[1444]: time="2025-09-09T00:21:12.012311193Z" level=info msg="CreateContainer within sandbox \"cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3e155b858dc68e63f811ef8b9bb8ff9501751d3902855949efd950558f4c19ce\""
Sep 9 00:21:12.013786 containerd[1444]: time="2025-09-09T00:21:12.013125163Z" level=info msg="StartContainer for \"3e155b858dc68e63f811ef8b9bb8ff9501751d3902855949efd950558f4c19ce\""
Sep 9 00:21:12.042788 systemd[1]: Started cri-containerd-3e155b858dc68e63f811ef8b9bb8ff9501751d3902855949efd950558f4c19ce.scope - libcontainer container 3e155b858dc68e63f811ef8b9bb8ff9501751d3902855949efd950558f4c19ce.
Sep 9 00:21:12.063927 systemd[1]: cri-containerd-3e155b858dc68e63f811ef8b9bb8ff9501751d3902855949efd950558f4c19ce.scope: Deactivated successfully.
Sep 9 00:21:12.083831 containerd[1444]: time="2025-09-09T00:21:12.083781073Z" level=info msg="StartContainer for \"3e155b858dc68e63f811ef8b9bb8ff9501751d3902855949efd950558f4c19ce\" returns successfully"
Sep 9 00:21:12.096235 containerd[1444]: time="2025-09-09T00:21:12.081425084Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b1d5e59_e88a_445f_8562_2af1b96999a7.slice/cri-containerd-3e155b858dc68e63f811ef8b9bb8ff9501751d3902855949efd950558f4c19ce.scope/memory.events\": no such file or directory"
Sep 9 00:21:12.113918 containerd[1444]: time="2025-09-09T00:21:12.113845684Z" level=info msg="shim disconnected" id=3e155b858dc68e63f811ef8b9bb8ff9501751d3902855949efd950558f4c19ce namespace=k8s.io
Sep 9 00:21:12.113918 containerd[1444]: time="2025-09-09T00:21:12.113900924Z" level=warning msg="cleaning up after shim disconnected" id=3e155b858dc68e63f811ef8b9bb8ff9501751d3902855949efd950558f4c19ce namespace=k8s.io
Sep 9 00:21:12.113918 containerd[1444]: time="2025-09-09T00:21:12.113910684Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:21:12.611012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e155b858dc68e63f811ef8b9bb8ff9501751d3902855949efd950558f4c19ce-rootfs.mount: Deactivated successfully.
Sep 9 00:21:12.991818 kubelet[2449]: E0909 00:21:12.991689 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:12.994585 containerd[1444]: time="2025-09-09T00:21:12.994535694Z" level=info msg="CreateContainer within sandbox \"cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 00:21:13.018758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931431465.mount: Deactivated successfully.
Sep 9 00:21:13.027205 containerd[1444]: time="2025-09-09T00:21:13.027161850Z" level=info msg="CreateContainer within sandbox \"cbc2a4ce3624d0a4fcf2dfcae92e8159444d9bacd029f4b1236b659605f1eaf8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"95a86231bc2f51214210eee46de0272ddddad0679fded18b49044b6d0ba5a082\""
Sep 9 00:21:13.028694 containerd[1444]: time="2025-09-09T00:21:13.028658988Z" level=info msg="StartContainer for \"95a86231bc2f51214210eee46de0272ddddad0679fded18b49044b6d0ba5a082\""
Sep 9 00:21:13.063756 systemd[1]: Started cri-containerd-95a86231bc2f51214210eee46de0272ddddad0679fded18b49044b6d0ba5a082.scope - libcontainer container 95a86231bc2f51214210eee46de0272ddddad0679fded18b49044b6d0ba5a082.
Sep 9 00:21:13.088581 containerd[1444]: time="2025-09-09T00:21:13.088523190Z" level=info msg="StartContainer for \"95a86231bc2f51214210eee46de0272ddddad0679fded18b49044b6d0ba5a082\" returns successfully"
Sep 9 00:21:13.366582 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 9 00:21:13.996415 kubelet[2449]: E0909 00:21:13.996362 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:15.718651 kubelet[2449]: E0909 00:21:15.718600 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:16.139658 systemd-networkd[1373]: lxc_health: Link UP
Sep 9 00:21:16.158425 systemd-networkd[1373]: lxc_health: Gained carrier
Sep 9 00:21:17.190763 systemd-networkd[1373]: lxc_health: Gained IPv6LL
Sep 9 00:21:17.723265 kubelet[2449]: E0909 00:21:17.721649 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:17.748722 kubelet[2449]: I0909 00:21:17.748064 2449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-swcv6" podStartSLOduration=8.748045879 podStartE2EDuration="8.748045879s" podCreationTimestamp="2025-09-09 00:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:21:14.011061201 +0000 UTC m=+87.335916191" watchObservedRunningTime="2025-09-09 00:21:17.748045879 +0000 UTC m=+91.072900869"
Sep 9 00:21:18.004664 kubelet[2449]: E0909 00:21:18.004446 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:18.753497 kubelet[2449]: E0909 00:21:18.753441 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:22.456891 kubelet[2449]: E0909 00:21:22.456803 2449 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35300->127.0.0.1:38729: write tcp 127.0.0.1:35300->127.0.0.1:38729: write: broken pipe
Sep 9 00:21:22.466681 sshd[4290]: pam_unix(sshd:session): session closed for user core
Sep 9 00:21:22.470095 systemd[1]: sshd@26-10.0.0.53:22-10.0.0.1:39720.service: Deactivated successfully.
Sep 9 00:21:22.472674 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 00:21:22.473313 systemd-logind[1422]: Session 27 logged out. Waiting for processes to exit.
Sep 9 00:21:22.474421 systemd-logind[1422]: Removed session 27.
Sep 9 00:21:22.752470 kubelet[2449]: E0909 00:21:22.752336 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"