Mar 6 01:43:15.211063 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:31:42 -00 2026 Mar 6 01:43:15.211085 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c Mar 6 01:43:15.211096 kernel: BIOS-provided physical RAM map: Mar 6 01:43:15.211102 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 6 01:43:15.211107 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 6 01:43:15.211113 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 6 01:43:15.211119 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 6 01:43:15.211124 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 6 01:43:15.211130 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 6 01:43:15.211135 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 6 01:43:15.211143 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 6 01:43:15.211149 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 6 01:43:15.211154 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 6 01:43:15.211160 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 6 01:43:15.211166 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 6 01:43:15.211172 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 6 01:43:15.211180 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 6 01:43:15.211186 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 6 01:43:15.211192 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 6 01:43:15.211198 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 6 01:43:15.211204 kernel: NX (Execute Disable) protection: active Mar 6 01:43:15.211209 kernel: APIC: Static calls initialized Mar 6 01:43:15.211215 kernel: efi: EFI v2.7 by EDK II Mar 6 01:43:15.211221 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 6 01:43:15.211227 kernel: SMBIOS 2.8 present. Mar 6 01:43:15.211233 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 6 01:43:15.211238 kernel: Hypervisor detected: KVM Mar 6 01:43:15.211246 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 6 01:43:15.211252 kernel: kvm-clock: using sched offset of 5802445120 cycles Mar 6 01:43:15.211258 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 6 01:43:15.211265 kernel: tsc: Detected 2445.426 MHz processor Mar 6 01:43:15.211271 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 6 01:43:15.211277 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 6 01:43:15.211283 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 6 01:43:15.211289 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 6 01:43:15.211295 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 6 01:43:15.211304 kernel: Using GB pages for direct mapping Mar 6 01:43:15.211310 kernel: Secure boot disabled Mar 6 01:43:15.211316 kernel: ACPI: Early table checksum verification disabled Mar 6 01:43:15.211322 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 6 01:43:15.211332 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 6 01:43:15.211338 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:43:15.211344 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:43:15.211353 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 6 01:43:15.211359 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:43:15.211366 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:43:15.211372 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:43:15.211378 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:43:15.211384 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 6 01:43:15.211391 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 6 01:43:15.211399 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 6 01:43:15.211405 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 6 01:43:15.211412 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 6 01:43:15.211418 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 6 01:43:15.211424 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 6 01:43:15.211476 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 6 01:43:15.211485 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 6 01:43:15.211492 kernel: No NUMA configuration found Mar 6 01:43:15.211498 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 6 01:43:15.211508 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 6 01:43:15.211514 kernel: Zone ranges: Mar 6 01:43:15.211520 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 6 01:43:15.211527 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 6 01:43:15.211533 kernel: Normal empty Mar 6 01:43:15.211539 
kernel: Movable zone start for each node Mar 6 01:43:15.211545 kernel: Early memory node ranges Mar 6 01:43:15.211552 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 6 01:43:15.211558 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 6 01:43:15.211564 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 6 01:43:15.211573 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 6 01:43:15.211579 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 6 01:43:15.211585 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 6 01:43:15.211591 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 6 01:43:15.211598 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 6 01:43:15.211604 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 6 01:43:15.211610 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 6 01:43:15.211616 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 6 01:43:15.211622 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 6 01:43:15.211631 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 6 01:43:15.211637 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 6 01:43:15.211643 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 6 01:43:15.211649 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 6 01:43:15.211655 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 6 01:43:15.211662 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 6 01:43:15.211668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 6 01:43:15.211674 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 6 01:43:15.211680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 6 01:43:15.211687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 6 01:43:15.211695 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Mar 6 01:43:15.211702 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 6 01:43:15.211708 kernel: TSC deadline timer available Mar 6 01:43:15.211714 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 6 01:43:15.211720 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 6 01:43:15.211726 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 6 01:43:15.211732 kernel: kvm-guest: setup PV sched yield Mar 6 01:43:15.211739 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 6 01:43:15.211745 kernel: Booting paravirtualized kernel on KVM Mar 6 01:43:15.211754 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 6 01:43:15.211760 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 6 01:43:15.211766 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 6 01:43:15.211773 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 6 01:43:15.211779 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 6 01:43:15.211785 kernel: kvm-guest: PV spinlocks enabled Mar 6 01:43:15.211792 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 6 01:43:15.211799 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c Mar 6 01:43:15.211808 kernel: random: crng init done Mar 6 01:43:15.211814 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 6 01:43:15.211821 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 6 01:43:15.211827 kernel: Fallback order for Node 0: 0 Mar 6 01:43:15.211834 kernel: Built 1 
zonelists, mobility grouping on. Total pages: 629759 Mar 6 01:43:15.211840 kernel: Policy zone: DMA32 Mar 6 01:43:15.211846 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 6 01:43:15.211853 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 6 01:43:15.211859 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 6 01:43:15.211869 kernel: ftrace: allocating 37996 entries in 149 pages Mar 6 01:43:15.211875 kernel: ftrace: allocated 149 pages with 4 groups Mar 6 01:43:15.211881 kernel: Dynamic Preempt: voluntary Mar 6 01:43:15.211888 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 6 01:43:15.211903 kernel: rcu: RCU event tracing is enabled. Mar 6 01:43:15.211912 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 6 01:43:15.211919 kernel: Trampoline variant of Tasks RCU enabled. Mar 6 01:43:15.211925 kernel: Rude variant of Tasks RCU enabled. Mar 6 01:43:15.211932 kernel: Tracing variant of Tasks RCU enabled. Mar 6 01:43:15.211938 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 6 01:43:15.211945 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 6 01:43:15.211952 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 6 01:43:15.211961 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 6 01:43:15.211967 kernel: Console: colour dummy device 80x25 Mar 6 01:43:15.211974 kernel: printk: console [ttyS0] enabled Mar 6 01:43:15.212012 kernel: ACPI: Core revision 20230628 Mar 6 01:43:15.212021 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 6 01:43:15.212030 kernel: APIC: Switch to symmetric I/O mode setup Mar 6 01:43:15.212037 kernel: x2apic enabled Mar 6 01:43:15.212043 kernel: APIC: Switched APIC routing to: physical x2apic Mar 6 01:43:15.212050 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 6 01:43:15.212057 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 6 01:43:15.212063 kernel: kvm-guest: setup PV IPIs Mar 6 01:43:15.212070 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 6 01:43:15.212077 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 6 01:43:15.212083 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 6 01:43:15.212092 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 6 01:43:15.212099 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 6 01:43:15.212105 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 6 01:43:15.212112 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 6 01:43:15.212119 kernel: Spectre V2 : Mitigation: Retpolines Mar 6 01:43:15.212125 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 6 01:43:15.212132 kernel: Speculative Store Bypass: Vulnerable Mar 6 01:43:15.212139 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 6 01:43:15.212146 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 6 01:43:15.212155 kernel: active return thunk: srso_alias_return_thunk Mar 6 01:43:15.212161 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 6 01:43:15.212168 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 6 01:43:15.212174 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 6 01:43:15.212181 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 6 01:43:15.212187 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 6 01:43:15.212194 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 6 01:43:15.212200 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 6 01:43:15.212210 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 6 01:43:15.212217 kernel: Freeing SMP alternatives memory: 32K Mar 6 01:43:15.212223 kernel: pid_max: default: 32768 minimum: 301 Mar 6 01:43:15.212230 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 6 01:43:15.212236 kernel: landlock: Up and running. Mar 6 01:43:15.212244 kernel: SELinux: Initializing. Mar 6 01:43:15.212256 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 6 01:43:15.212268 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 6 01:43:15.212280 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 6 01:43:15.212297 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 6 01:43:15.212307 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 6 01:43:15.212318 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 6 01:43:15.212329 kernel: Performance Events: PMU not available due to virtualization, using software events only. 
Mar 6 01:43:15.212342 kernel: signal: max sigframe size: 1776 Mar 6 01:43:15.212353 kernel: rcu: Hierarchical SRCU implementation. Mar 6 01:43:15.212366 kernel: rcu: Max phase no-delay instances is 400. Mar 6 01:43:15.212481 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 6 01:43:15.212490 kernel: smp: Bringing up secondary CPUs ... Mar 6 01:43:15.212502 kernel: smpboot: x86: Booting SMP configuration: Mar 6 01:43:15.212509 kernel: .... node #0, CPUs: #1 #2 #3 Mar 6 01:43:15.212515 kernel: smp: Brought up 1 node, 4 CPUs Mar 6 01:43:15.212522 kernel: smpboot: Max logical packages: 1 Mar 6 01:43:15.212529 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 6 01:43:15.212536 kernel: devtmpfs: initialized Mar 6 01:43:15.212542 kernel: x86/mm: Memory block size: 128MB Mar 6 01:43:15.212549 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 6 01:43:15.212555 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 6 01:43:15.212564 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 6 01:43:15.212571 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 6 01:43:15.212578 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 6 01:43:15.212584 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 6 01:43:15.212592 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 6 01:43:15.212605 kernel: pinctrl core: initialized pinctrl subsystem Mar 6 01:43:15.212618 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 6 01:43:15.212628 kernel: audit: initializing netlink subsys (disabled) Mar 6 01:43:15.212640 kernel: audit: type=2000 audit(1772761393.029:1): state=initialized audit_enabled=0 res=1 Mar 6 01:43:15.212657 kernel: thermal_sys: Registered thermal governor 
'step_wise' Mar 6 01:43:15.212670 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 6 01:43:15.212682 kernel: cpuidle: using governor menu Mar 6 01:43:15.212693 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 6 01:43:15.212706 kernel: dca service started, version 1.12.1 Mar 6 01:43:15.212717 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 6 01:43:15.212728 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 6 01:43:15.212738 kernel: PCI: Using configuration type 1 for base access Mar 6 01:43:15.212750 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 6 01:43:15.212798 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 6 01:43:15.212812 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 6 01:43:15.212826 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 6 01:43:15.212837 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 6 01:43:15.212849 kernel: ACPI: Added _OSI(Module Device) Mar 6 01:43:15.212859 kernel: ACPI: Added _OSI(Processor Device) Mar 6 01:43:15.212866 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 6 01:43:15.212872 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 6 01:43:15.212879 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 6 01:43:15.212890 kernel: ACPI: Interpreter enabled Mar 6 01:43:15.212896 kernel: ACPI: PM: (supports S0 S3 S5) Mar 6 01:43:15.212903 kernel: ACPI: Using IOAPIC for interrupt routing Mar 6 01:43:15.212910 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 6 01:43:15.212917 kernel: PCI: Using E820 reservations for host bridge windows Mar 6 01:43:15.212924 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 6 01:43:15.212930 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 
6 01:43:15.213196 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 6 01:43:15.213393 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 6 01:43:15.213613 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 6 01:43:15.213628 kernel: PCI host bridge to bus 0000:00 Mar 6 01:43:15.213756 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 6 01:43:15.213870 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 6 01:43:15.214028 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 6 01:43:15.214149 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 6 01:43:15.214266 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 6 01:43:15.214376 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 6 01:43:15.214551 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 6 01:43:15.214693 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 6 01:43:15.214827 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 6 01:43:15.214949 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 6 01:43:15.215159 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 6 01:43:15.215286 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 6 01:43:15.215405 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 6 01:43:15.215600 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 6 01:43:15.215738 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 6 01:43:15.215860 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 6 01:43:15.216023 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 6 01:43:15.216160 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 6 
01:43:15.216288 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 6 01:43:15.216410 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 6 01:43:15.216596 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 6 01:43:15.216721 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 6 01:43:15.216850 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 6 01:43:15.217045 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 6 01:43:15.217245 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 6 01:43:15.217517 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 6 01:43:15.217661 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 6 01:43:15.217793 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 6 01:43:15.218030 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 6 01:43:15.218226 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 6 01:43:15.218358 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 6 01:43:15.218568 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 6 01:43:15.218704 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 6 01:43:15.218825 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 6 01:43:15.218835 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 6 01:43:15.218842 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 6 01:43:15.218849 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 6 01:43:15.218856 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 6 01:43:15.218867 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 6 01:43:15.218873 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 6 01:43:15.218880 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 6 01:43:15.218887 kernel: ACPI: PCI: Interrupt 
link LNKH configured for IRQ 11 Mar 6 01:43:15.218893 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 6 01:43:15.218900 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 6 01:43:15.218907 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 6 01:43:15.218913 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 6 01:43:15.218920 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 6 01:43:15.218928 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 6 01:43:15.218935 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 6 01:43:15.218941 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 6 01:43:15.218948 kernel: iommu: Default domain type: Translated Mar 6 01:43:15.218955 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 6 01:43:15.218962 kernel: efivars: Registered efivars operations Mar 6 01:43:15.218969 kernel: PCI: Using ACPI for IRQ routing Mar 6 01:43:15.218975 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 6 01:43:15.219026 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 6 01:43:15.219037 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 6 01:43:15.219044 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 6 01:43:15.219050 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 6 01:43:15.219177 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 6 01:43:15.219298 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 6 01:43:15.219416 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 6 01:43:15.219425 kernel: vgaarb: loaded Mar 6 01:43:15.219490 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 6 01:43:15.219499 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 6 01:43:15.219510 kernel: clocksource: Switched to clocksource kvm-clock Mar 6 01:43:15.219517 kernel: VFS: Disk quotas dquot_6.6.0 Mar 6 
01:43:15.219524 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 6 01:43:15.219530 kernel: pnp: PnP ACPI init Mar 6 01:43:15.219675 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 6 01:43:15.219686 kernel: pnp: PnP ACPI: found 6 devices Mar 6 01:43:15.219693 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 6 01:43:15.219700 kernel: NET: Registered PF_INET protocol family Mar 6 01:43:15.219710 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 6 01:43:15.219716 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 6 01:43:15.219723 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 6 01:43:15.219730 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 6 01:43:15.219736 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 6 01:43:15.219743 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 6 01:43:15.219750 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 6 01:43:15.219756 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 6 01:43:15.219763 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 6 01:43:15.219772 kernel: NET: Registered PF_XDP protocol family Mar 6 01:43:15.219895 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 6 01:43:15.220070 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 6 01:43:15.220187 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 6 01:43:15.220297 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 6 01:43:15.220406 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 6 01:43:15.220574 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff 
window] Mar 6 01:43:15.220689 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 6 01:43:15.220805 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 6 01:43:15.220814 kernel: PCI: CLS 0 bytes, default 64 Mar 6 01:43:15.220821 kernel: Initialise system trusted keyrings Mar 6 01:43:15.220828 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 6 01:43:15.220835 kernel: Key type asymmetric registered Mar 6 01:43:15.220842 kernel: Asymmetric key parser 'x509' registered Mar 6 01:43:15.220848 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 6 01:43:15.220855 kernel: io scheduler mq-deadline registered Mar 6 01:43:15.220861 kernel: io scheduler kyber registered Mar 6 01:43:15.220871 kernel: io scheduler bfq registered Mar 6 01:43:15.220878 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 6 01:43:15.220885 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 6 01:43:15.220891 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 6 01:43:15.220898 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 6 01:43:15.220905 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 6 01:43:15.220911 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 6 01:43:15.220918 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 6 01:43:15.220925 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 6 01:43:15.220934 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 6 01:43:15.221108 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 6 01:43:15.221121 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 6 01:43:15.221234 kernel: rtc_cmos 00:04: registered as rtc0 Mar 6 01:43:15.221349 kernel: rtc_cmos 00:04: setting system clock to 2026-03-06T01:43:14 UTC (1772761394) Mar 6 01:43:15.221531 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 6 
01:43:15.221543 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 6 01:43:15.221554 kernel: efifb: probing for efifb Mar 6 01:43:15.221561 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 6 01:43:15.221567 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 6 01:43:15.221574 kernel: efifb: scrolling: redraw Mar 6 01:43:15.221580 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 6 01:43:15.221587 kernel: Console: switching to colour frame buffer device 100x37 Mar 6 01:43:15.221594 kernel: fb0: EFI VGA frame buffer device Mar 6 01:43:15.221601 kernel: pstore: Using crash dump compression: deflate Mar 6 01:43:15.221607 kernel: pstore: Registered efi_pstore as persistent store backend Mar 6 01:43:15.221614 kernel: NET: Registered PF_INET6 protocol family Mar 6 01:43:15.221623 kernel: Segment Routing with IPv6 Mar 6 01:43:15.221629 kernel: In-situ OAM (IOAM) with IPv6 Mar 6 01:43:15.221636 kernel: NET: Registered PF_PACKET protocol family Mar 6 01:43:15.221642 kernel: Key type dns_resolver registered Mar 6 01:43:15.221650 kernel: IPI shorthand broadcast: enabled Mar 6 01:43:15.221675 kernel: sched_clock: Marking stable (1050040855, 436089314)->(1923309901, -437179732) Mar 6 01:43:15.221684 kernel: registered taskstats version 1 Mar 6 01:43:15.221691 kernel: Loading compiled-in X.509 certificates Mar 6 01:43:15.221698 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6d88f6264570591a57b3c9c1e1c99fca6c68b8ca' Mar 6 01:43:15.221707 kernel: Key type .fscrypt registered Mar 6 01:43:15.221714 kernel: Key type fscrypt-provisioning registered Mar 6 01:43:15.221721 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 6 01:43:15.221728 kernel: ima: Allocated hash algorithm: sha1 Mar 6 01:43:15.221735 kernel: ima: No architecture policies found Mar 6 01:43:15.221741 kernel: clk: Disabling unused clocks Mar 6 01:43:15.221748 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 6 01:43:15.221755 kernel: Write protecting the kernel read-only data: 36864k Mar 6 01:43:15.221765 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 6 01:43:15.221772 kernel: Run /init as init process Mar 6 01:43:15.221779 kernel: with arguments: Mar 6 01:43:15.221785 kernel: /init Mar 6 01:43:15.221792 kernel: with environment: Mar 6 01:43:15.221799 kernel: HOME=/ Mar 6 01:43:15.221806 kernel: TERM=linux Mar 6 01:43:15.221814 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 6 01:43:15.221826 systemd[1]: Detected virtualization kvm. Mar 6 01:43:15.221834 systemd[1]: Detected architecture x86-64. Mar 6 01:43:15.221841 systemd[1]: Running in initrd. Mar 6 01:43:15.221848 systemd[1]: No hostname configured, using default hostname. Mar 6 01:43:15.221855 systemd[1]: Hostname set to . Mar 6 01:43:15.221862 systemd[1]: Initializing machine ID from VM UUID. Mar 6 01:43:15.221869 systemd[1]: Queued start job for default target initrd.target. Mar 6 01:43:15.221877 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:43:15.221887 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:43:15.221895 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 6 01:43:15.221903 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 01:43:15.221910 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 01:43:15.221945 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 01:43:15.221958 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 01:43:15.221965 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 01:43:15.221973 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:43:15.222010 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:43:15.222018 systemd[1]: Reached target paths.target - Path Units.
Mar 6 01:43:15.222026 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 01:43:15.222033 systemd[1]: Reached target swap.target - Swaps.
Mar 6 01:43:15.222044 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 01:43:15.222051 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 01:43:15.222059 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 01:43:15.222066 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 01:43:15.222074 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 6 01:43:15.222081 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:43:15.222089 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:43:15.222096 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:43:15.222106 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 01:43:15.222113 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 01:43:15.222123 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 01:43:15.222130 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 01:43:15.222138 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 01:43:15.222145 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 01:43:15.222153 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 01:43:15.222160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:43:15.222168 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 01:43:15.222178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:43:15.222185 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 01:43:15.222193 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 01:43:15.222200 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:43:15.222210 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 01:43:15.222240 systemd-journald[194]: Collecting audit messages is disabled.
Mar 6 01:43:15.222258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:43:15.222266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:43:15.222277 systemd-journald[194]: Journal started
Mar 6 01:43:15.222293 systemd-journald[194]: Runtime Journal (/run/log/journal/26443e78b2e346bdad50c32af5749fb0) is 6.0M, max 48.3M, 42.2M free.
Mar 6 01:43:15.192176 systemd-modules-load[195]: Inserted module 'overlay'
Mar 6 01:43:15.242056 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 01:43:15.253512 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 01:43:15.257878 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 6 01:43:15.262266 kernel: Bridge firewalling registered
Mar 6 01:43:15.262791 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:43:15.264113 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 01:43:15.264742 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:43:15.266619 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 01:43:15.286886 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:43:15.309703 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 01:43:15.318253 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:43:15.324203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:43:15.338964 dracut-cmdline[225]: dracut-dracut-053
Mar 6 01:43:15.338964 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:43:15.369827 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 01:43:15.425715 systemd-resolved[260]: Positive Trust Anchors:
Mar 6 01:43:15.425769 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 01:43:15.425816 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 01:43:15.430188 systemd-resolved[260]: Defaulting to hostname 'linux'.
Mar 6 01:43:15.432109 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 01:43:15.467561 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:43:15.524558 kernel: SCSI subsystem initialized
Mar 6 01:43:15.539529 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 01:43:15.557522 kernel: iscsi: registered transport (tcp)
Mar 6 01:43:15.586232 kernel: iscsi: registered transport (qla4xxx)
Mar 6 01:43:15.586302 kernel: QLogic iSCSI HBA Driver
Mar 6 01:43:15.661675 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 01:43:15.684749 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 01:43:15.732516 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 01:43:15.732599 kernel: device-mapper: uevent: version 1.0.3
Mar 6 01:43:15.734472 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 6 01:43:15.799546 kernel: raid6: avx2x4 gen() 21490 MB/s
Mar 6 01:43:15.821529 kernel: raid6: avx2x2 gen() 22860 MB/s
Mar 6 01:43:15.841696 kernel: raid6: avx2x1 gen() 15341 MB/s
Mar 6 01:43:15.841759 kernel: raid6: using algorithm avx2x2 gen() 22860 MB/s
Mar 6 01:43:15.862813 kernel: raid6: .... xor() 15820 MB/s, rmw enabled
Mar 6 01:43:15.862898 kernel: raid6: using avx2x2 recovery algorithm
Mar 6 01:43:15.892539 kernel: xor: automatically using best checksumming function avx
Mar 6 01:43:16.173518 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 01:43:16.189296 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 01:43:16.198916 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:43:16.215745 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 6 01:43:16.221954 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:43:16.239812 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 01:43:16.258653 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Mar 6 01:43:16.302018 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 01:43:16.315717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 01:43:16.405103 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:43:16.419703 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 01:43:16.441535 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 01:43:16.450272 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 01:43:16.459837 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:43:16.467629 kernel: cryptd: max_cpu_qlen set to 1000
Mar 6 01:43:16.471232 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 01:43:16.489503 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 6 01:43:16.490680 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 01:43:16.508398 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 01:43:16.538632 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 6 01:43:16.539257 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 6 01:43:16.539287 kernel: GPT:9289727 != 19775487
Mar 6 01:43:16.539347 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 6 01:43:16.539374 kernel: GPT:9289727 != 19775487
Mar 6 01:43:16.539399 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 6 01:43:16.539423 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:43:16.544372 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 01:43:16.545278 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:43:16.556144 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:43:16.562968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 01:43:16.563167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:43:16.570472 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:43:16.589749 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:43:16.607052 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 6 01:43:16.607130 kernel: AES CTR mode by8 optimization enabled
Mar 6 01:43:16.607150 kernel: libata version 3.00 loaded.
Mar 6 01:43:16.617532 kernel: ahci 0000:00:1f.2: version 3.0
Mar 6 01:43:16.622611 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 6 01:43:16.640968 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 6 01:43:16.641385 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 6 01:43:16.649531 kernel: scsi host0: ahci
Mar 6 01:43:16.649796 kernel: BTRFS: device fsid eccec0b1-0068-4620-ab61-f332f16460fa devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (476)
Mar 6 01:43:16.652353 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 6 01:43:16.662661 kernel: scsi host1: ahci
Mar 6 01:43:16.662952 kernel: scsi host2: ahci
Mar 6 01:43:16.663040 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471)
Mar 6 01:43:16.673726 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:43:16.692587 kernel: scsi host3: ahci
Mar 6 01:43:16.692834 kernel: scsi host4: ahci
Mar 6 01:43:16.693137 kernel: scsi host5: ahci
Mar 6 01:43:16.693356 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 6 01:43:16.693369 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 6 01:43:16.693380 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 6 01:43:16.700602 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 6 01:43:16.700645 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 6 01:43:16.704698 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 6 01:43:16.721978 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 6 01:43:16.758208 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 6 01:43:16.769761 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 6 01:43:16.784297 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 6 01:43:16.804732 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 01:43:16.813623 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:43:16.830298 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:43:16.830326 disk-uuid[567]: Primary Header is updated.
Mar 6 01:43:16.830326 disk-uuid[567]: Secondary Entries is updated.
Mar 6 01:43:16.830326 disk-uuid[567]: Secondary Header is updated.
Mar 6 01:43:16.843492 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:43:16.848554 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:43:17.025495 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 6 01:43:17.025574 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 6 01:43:17.025591 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 6 01:43:17.028541 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 6 01:43:17.032709 kernel: ata3.00: applying bridge limits
Mar 6 01:43:17.033528 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 6 01:43:17.035501 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 6 01:43:17.038526 kernel: ata3.00: configured for UDMA/100
Mar 6 01:43:17.040560 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 6 01:43:17.043496 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 6 01:43:17.097014 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 6 01:43:17.097377 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 6 01:43:17.110580 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 6 01:43:17.837560 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:43:17.837623 disk-uuid[569]: The operation has completed successfully.
Mar 6 01:43:17.881146 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 01:43:17.881300 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 01:43:17.910824 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 01:43:17.926136 sh[594]: Success
Mar 6 01:43:17.949524 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 6 01:43:17.999982 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 01:43:18.022482 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 01:43:18.027390 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 01:43:18.050301 kernel: BTRFS info (device dm-0): first mount of filesystem eccec0b1-0068-4620-ab61-f332f16460fa
Mar 6 01:43:18.050363 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:43:18.050381 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 6 01:43:18.057855 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 6 01:43:18.057890 kernel: BTRFS info (device dm-0): using free space tree
Mar 6 01:43:18.072148 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 01:43:18.077042 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 01:43:18.092762 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 01:43:18.097958 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 01:43:18.119662 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:43:18.119686 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:43:18.119697 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:43:18.129493 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:43:18.143975 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 6 01:43:18.152585 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:43:18.159615 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 01:43:18.176651 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 01:43:18.246729 ignition[696]: Ignition 2.19.0
Mar 6 01:43:18.246768 ignition[696]: Stage: fetch-offline
Mar 6 01:43:18.246837 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:43:18.246857 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:43:18.247029 ignition[696]: parsed url from cmdline: ""
Mar 6 01:43:18.247038 ignition[696]: no config URL provided
Mar 6 01:43:18.247048 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 01:43:18.247064 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Mar 6 01:43:18.249509 ignition[696]: op(1): [started] loading QEMU firmware config module
Mar 6 01:43:18.249527 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 6 01:43:18.278924 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 01:43:18.290550 ignition[696]: op(1): [finished] loading QEMU firmware config module
Mar 6 01:43:18.296801 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 01:43:18.326923 systemd-networkd[781]: lo: Link UP
Mar 6 01:43:18.326957 systemd-networkd[781]: lo: Gained carrier
Mar 6 01:43:18.329553 systemd-networkd[781]: Enumeration completed
Mar 6 01:43:18.330826 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:43:18.330833 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 01:43:18.334601 systemd-networkd[781]: eth0: Link UP
Mar 6 01:43:18.334608 systemd-networkd[781]: eth0: Gained carrier
Mar 6 01:43:18.334619 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:43:18.360790 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 01:43:18.365522 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 01:43:18.375133 systemd[1]: Reached target network.target - Network.
Mar 6 01:43:18.536415 ignition[696]: parsing config with SHA512: 0241f647e4ba5c526a712d23b295b318cd0fc8420ed8833f4200963e343f8f2ed0b9a9529b9d8a69fd55ac98d2810e0eb09bc02493ff43c22431ef5f5eecfe90
Mar 6 01:43:18.540265 unknown[696]: fetched base config from "system"
Mar 6 01:43:18.540531 unknown[696]: fetched user config from "qemu"
Mar 6 01:43:18.540936 ignition[696]: fetch-offline: fetch-offline passed
Mar 6 01:43:18.541052 ignition[696]: Ignition finished successfully
Mar 6 01:43:18.555940 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 01:43:18.556882 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 6 01:43:18.575728 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 01:43:18.623607 ignition[785]: Ignition 2.19.0
Mar 6 01:43:18.623636 ignition[785]: Stage: kargs
Mar 6 01:43:18.623799 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:43:18.623811 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:43:18.624645 ignition[785]: kargs: kargs passed
Mar 6 01:43:18.624692 ignition[785]: Ignition finished successfully
Mar 6 01:43:18.641738 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 01:43:18.657763 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 01:43:18.680119 ignition[793]: Ignition 2.19.0
Mar 6 01:43:18.680171 ignition[793]: Stage: disks
Mar 6 01:43:18.683525 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:43:18.683549 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:43:18.696232 ignition[793]: disks: disks passed
Mar 6 01:43:18.696341 ignition[793]: Ignition finished successfully
Mar 6 01:43:18.704909 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 01:43:18.709784 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 01:43:18.724604 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 01:43:18.735359 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 01:43:18.740039 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 01:43:18.740160 systemd[1]: Reached target basic.target - Basic System.
Mar 6 01:43:18.765779 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 01:43:18.792973 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 6 01:43:18.802358 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 01:43:18.828300 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 01:43:18.983522 kernel: EXT4-fs (vda9): mounted filesystem 6fb83788-0471-4e89-b45f-3a7586a627a9 r/w with ordered data mode. Quota mode: none.
Mar 6 01:43:18.984893 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 01:43:18.989282 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 01:43:19.009098 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:43:19.014350 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 01:43:19.055562 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812)
Mar 6 01:43:19.055590 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:43:19.055609 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:43:19.055619 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:43:19.055629 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:43:19.028563 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 6 01:43:19.028636 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 01:43:19.028676 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 01:43:19.057169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:43:19.064178 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 01:43:19.094831 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 01:43:19.156167 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 01:43:19.163146 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Mar 6 01:43:19.169716 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 01:43:19.180352 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 01:43:19.342600 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 01:43:19.363597 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 01:43:19.368529 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 01:43:19.390580 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 01:43:19.398105 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:43:19.410206 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 01:43:19.463375 ignition[926]: INFO : Ignition 2.19.0
Mar 6 01:43:19.463375 ignition[926]: INFO : Stage: mount
Mar 6 01:43:19.468874 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:43:19.468874 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:43:19.468874 ignition[926]: INFO : mount: mount passed
Mar 6 01:43:19.468874 ignition[926]: INFO : Ignition finished successfully
Mar 6 01:43:19.472966 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 01:43:19.490657 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 01:43:19.499301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:43:19.526498 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Mar 6 01:43:19.534853 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:43:19.534890 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:43:19.534909 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:43:19.543479 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:43:19.545421 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:43:19.740885 ignition[955]: INFO : Ignition 2.19.0
Mar 6 01:43:19.740885 ignition[955]: INFO : Stage: files
Mar 6 01:43:19.745385 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:43:19.745385 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:43:19.745385 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 01:43:19.755489 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 01:43:19.755489 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 01:43:19.764699 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 01:43:19.764699 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 01:43:19.764699 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 01:43:19.764699 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:43:19.764699 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 6 01:43:19.759509 unknown[955]: wrote ssh authorized keys file for user: core
Mar 6 01:43:19.853768 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 6 01:43:19.988648 systemd-networkd[781]: eth0: Gained IPv6LL
Mar 6 01:43:20.039514 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:43:20.039514 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 6 01:43:20.049972 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 6 01:43:20.236393 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 6 01:43:20.822045 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 6 01:43:20.822045 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 01:43:20.836626 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 6 01:43:21.207914 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 6 01:43:22.947639 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 01:43:22.947639 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 6 01:43:22.959869 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 01:43:22.959869 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 01:43:22.959869 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 6 01:43:22.959869 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 6 01:43:22.959869 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 01:43:22.959869 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 01:43:22.959869 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 6 01:43:22.959869 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 6 01:43:23.013088 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 01:43:23.031931 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 01:43:23.037938 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 6 01:43:23.037938 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 6 01:43:23.046047 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 6 01:43:23.050916 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 01:43:23.050916 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 01:43:23.050916 ignition[955]: INFO : files: files passed
Mar 6 01:43:23.050916 ignition[955]: INFO : Ignition finished successfully
Mar 6 01:43:23.068966 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 6 01:43:23.090863 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 6 01:43:23.100349 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 6 01:43:23.109812 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 6 01:43:23.110056 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 6 01:43:23.124202 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Mar 6 01:43:23.129656 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:43:23.129656 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:43:23.142990 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:43:23.145677 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 01:43:23.153272 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 6 01:43:23.174716 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 6 01:43:23.208673 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 6 01:43:23.208896 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 6 01:43:23.216687 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 6 01:43:23.223736 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 6 01:43:23.227170 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 6 01:43:23.244691 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 6 01:43:23.260376 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Mar 6 01:43:23.278625 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 6 01:43:23.290101 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 6 01:43:23.293913 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 01:43:23.300536 systemd[1]: Stopped target timers.target - Timer Units. Mar 6 01:43:23.303868 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 6 01:43:23.303997 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 6 01:43:23.432618 ignition[1009]: INFO : Ignition 2.19.0 Mar 6 01:43:23.432618 ignition[1009]: INFO : Stage: umount Mar 6 01:43:23.432618 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 01:43:23.432618 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 01:43:23.432618 ignition[1009]: INFO : umount: umount passed Mar 6 01:43:23.432618 ignition[1009]: INFO : Ignition finished successfully Mar 6 01:43:23.306571 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 6 01:43:23.307166 systemd[1]: Stopped target basic.target - Basic System. Mar 6 01:43:23.308532 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 6 01:43:23.309129 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 6 01:43:23.310082 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 6 01:43:23.311044 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 6 01:43:23.311506 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 6 01:43:23.312082 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 6 01:43:23.312416 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 6 01:43:23.313427 systemd[1]: Stopped target swap.target - Swaps. 
Mar 6 01:43:23.314099 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 6 01:43:23.314224 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 6 01:43:23.318123 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 6 01:43:23.319616 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 01:43:23.320158 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 6 01:43:23.320555 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:43:23.320785 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 6 01:43:23.320913 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 6 01:43:23.322581 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 6 01:43:23.322713 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 6 01:43:23.323157 systemd[1]: Stopped target paths.target - Path Units. Mar 6 01:43:23.323563 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 6 01:43:23.327570 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:43:23.327772 systemd[1]: Stopped target slices.target - Slice Units. Mar 6 01:43:23.329872 systemd[1]: Stopped target sockets.target - Socket Units. Mar 6 01:43:23.330142 systemd[1]: iscsid.socket: Deactivated successfully. Mar 6 01:43:23.330270 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 6 01:43:23.331054 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 6 01:43:23.331156 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 6 01:43:23.332080 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 6 01:43:23.332212 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Mar 6 01:43:23.332571 systemd[1]: ignition-files.service: Deactivated successfully. Mar 6 01:43:23.332685 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 6 01:43:23.334007 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 6 01:43:23.334283 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 6 01:43:23.334504 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 01:43:23.335878 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 6 01:43:23.337223 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 6 01:43:23.337366 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 01:43:23.340684 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 6 01:43:23.340948 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 6 01:43:23.345680 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 6 01:43:23.345810 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 6 01:43:23.361160 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 6 01:43:23.361350 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 6 01:43:23.362582 systemd[1]: Stopped target network.target - Network. Mar 6 01:43:23.362932 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 6 01:43:23.362986 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 6 01:43:23.363991 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 6 01:43:23.364078 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 6 01:43:23.364991 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 6 01:43:23.365072 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 6 01:43:23.365506 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Mar 6 01:43:23.365551 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 6 01:43:23.366231 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 6 01:43:23.367087 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 6 01:43:23.369337 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 6 01:43:23.429300 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 6 01:43:23.432615 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 6 01:43:23.437774 systemd-networkd[781]: eth0: DHCPv6 lease lost Mar 6 01:43:23.493214 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 6 01:43:23.493486 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 6 01:43:23.505260 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 6 01:43:23.505524 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 6 01:43:23.521574 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 6 01:43:23.521650 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 6 01:43:23.531277 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 6 01:43:23.531362 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 6 01:43:23.552739 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 6 01:43:23.554833 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 6 01:43:23.554936 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 6 01:43:23.563657 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 6 01:43:23.563719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:43:23.569297 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 6 01:43:23.569378 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Mar 6 01:43:23.574963 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 6 01:43:23.575056 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:43:23.578275 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:43:23.610662 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 6 01:43:23.610857 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:43:23.614724 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 6 01:43:23.614786 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 6 01:43:23.620934 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 6 01:43:23.620979 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 01:43:23.626518 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 6 01:43:23.626574 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 6 01:43:23.635078 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 6 01:43:23.635133 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 6 01:43:23.648243 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 6 01:43:23.648328 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 6 01:43:23.807133 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 6 01:43:23.818216 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 6 01:43:23.819169 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 01:43:23.841271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 6 01:43:23.841516 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 6 01:43:23.863564 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 6 01:43:23.863838 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 6 01:43:23.897667 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 6 01:43:23.904637 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 6 01:43:23.916395 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 6 01:43:23.942783 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 6 01:43:23.952106 systemd[1]: Switching root. Mar 6 01:43:23.987772 systemd-journald[194]: Journal stopped Mar 6 01:43:25.485827 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Mar 6 01:43:25.485929 kernel: SELinux: policy capability network_peer_controls=1 Mar 6 01:43:25.485952 kernel: SELinux: policy capability open_perms=1 Mar 6 01:43:25.485988 kernel: SELinux: policy capability extended_socket_class=1 Mar 6 01:43:25.486007 kernel: SELinux: policy capability always_check_network=0 Mar 6 01:43:25.486060 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 6 01:43:25.486081 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 6 01:43:25.486099 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 6 01:43:25.486120 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 6 01:43:25.486137 kernel: audit: type=1403 audit(1772761404.211:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 6 01:43:25.486170 systemd[1]: Successfully loaded SELinux policy in 65.355ms. Mar 6 01:43:25.486207 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.145ms. 
Mar 6 01:43:25.486230 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 6 01:43:25.486250 systemd[1]: Detected virtualization kvm. Mar 6 01:43:25.486270 systemd[1]: Detected architecture x86-64. Mar 6 01:43:25.486288 systemd[1]: Detected first boot. Mar 6 01:43:25.486308 systemd[1]: Initializing machine ID from VM UUID. Mar 6 01:43:25.486326 zram_generator::config[1054]: No configuration found. Mar 6 01:43:25.486350 systemd[1]: Populated /etc with preset unit settings. Mar 6 01:43:25.486370 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 6 01:43:25.486386 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 6 01:43:25.486405 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 6 01:43:25.486426 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 6 01:43:25.486526 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 6 01:43:25.486548 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 6 01:43:25.486567 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 6 01:43:25.486593 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 6 01:43:25.486614 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 6 01:43:25.486633 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 6 01:43:25.486653 systemd[1]: Created slice user.slice - User and Session Slice. Mar 6 01:43:25.486672 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 6 01:43:25.486691 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:43:25.486711 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 6 01:43:25.486730 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 6 01:43:25.486749 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 6 01:43:25.486774 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 6 01:43:25.486793 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 6 01:43:25.486822 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 01:43:25.486843 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 6 01:43:25.486864 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 6 01:43:25.486884 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 6 01:43:25.486904 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 6 01:43:25.486923 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 01:43:25.486947 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 6 01:43:25.486974 systemd[1]: Reached target slices.target - Slice Units. Mar 6 01:43:25.486993 systemd[1]: Reached target swap.target - Swaps. Mar 6 01:43:25.487013 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 6 01:43:25.487067 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 6 01:43:25.487088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 6 01:43:25.487108 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Mar 6 01:43:25.487129 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 01:43:25.487148 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 6 01:43:25.487173 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 6 01:43:25.487192 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 6 01:43:25.487213 systemd[1]: Mounting media.mount - External Media Directory... Mar 6 01:43:25.487232 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:43:25.487252 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 6 01:43:25.487272 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 6 01:43:25.487291 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 6 01:43:25.487311 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 6 01:43:25.487332 systemd[1]: Reached target machines.target - Containers. Mar 6 01:43:25.487356 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 6 01:43:25.487375 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:43:25.487395 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 6 01:43:25.487414 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 6 01:43:25.487489 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:43:25.487512 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 01:43:25.487531 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 6 01:43:25.487551 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 6 01:43:25.487578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:43:25.487599 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 6 01:43:25.487618 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 6 01:43:25.487637 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 6 01:43:25.487656 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 6 01:43:25.487676 systemd[1]: Stopped systemd-fsck-usr.service. Mar 6 01:43:25.487694 kernel: ACPI: bus type drm_connector registered Mar 6 01:43:25.487713 kernel: fuse: init (API version 7.39) Mar 6 01:43:25.487736 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 6 01:43:25.487759 kernel: loop: module loaded Mar 6 01:43:25.487779 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 6 01:43:25.487798 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 6 01:43:25.487819 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 6 01:43:25.487838 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 6 01:43:25.487858 systemd[1]: verity-setup.service: Deactivated successfully. Mar 6 01:43:25.492276 systemd[1]: Stopped verity-setup.service. Mar 6 01:43:25.492301 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:43:25.492350 systemd-journald[1138]: Collecting audit messages is disabled. Mar 6 01:43:25.492395 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Mar 6 01:43:25.492417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 6 01:43:25.492497 systemd[1]: Mounted media.mount - External Media Directory. Mar 6 01:43:25.492524 systemd-journald[1138]: Journal started Mar 6 01:43:25.492558 systemd-journald[1138]: Runtime Journal (/run/log/journal/26443e78b2e346bdad50c32af5749fb0) is 6.0M, max 48.3M, 42.2M free. Mar 6 01:43:24.883821 systemd[1]: Queued start job for default target multi-user.target. Mar 6 01:43:24.912595 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 6 01:43:24.913301 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 6 01:43:24.913784 systemd[1]: systemd-journald.service: Consumed 1.463s CPU time. Mar 6 01:43:25.498565 systemd[1]: Started systemd-journald.service - Journal Service. Mar 6 01:43:25.505798 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 6 01:43:25.510258 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 6 01:43:25.514820 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 6 01:43:25.530684 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 6 01:43:25.534645 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 01:43:25.538729 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 6 01:43:25.539062 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 6 01:43:25.543101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:43:25.543394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:43:25.548519 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 01:43:25.548797 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 01:43:25.552388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 6 01:43:25.552740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:43:25.557341 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 6 01:43:25.557699 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 6 01:43:25.561519 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:43:25.561731 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:43:25.565985 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 6 01:43:25.569765 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 6 01:43:25.574400 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 6 01:43:25.590728 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 6 01:43:25.608641 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 6 01:43:25.613531 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 6 01:43:25.618757 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 6 01:43:25.618816 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 6 01:43:25.624969 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 6 01:43:25.638764 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 6 01:43:25.644556 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 6 01:43:25.648243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:43:25.650507 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Mar 6 01:43:25.654990 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 6 01:43:25.658528 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 01:43:25.660299 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 6 01:43:25.664228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 01:43:25.668507 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 6 01:43:25.672013 systemd-journald[1138]: Time spent on flushing to /var/log/journal/26443e78b2e346bdad50c32af5749fb0 is 22.133ms for 983 entries. Mar 6 01:43:25.672013 systemd-journald[1138]: System Journal (/var/log/journal/26443e78b2e346bdad50c32af5749fb0) is 8.0M, max 195.6M, 187.6M free. Mar 6 01:43:25.736837 systemd-journald[1138]: Received client request to flush runtime journal. Mar 6 01:43:25.684566 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 6 01:43:25.689728 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 6 01:43:25.694820 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 01:43:25.698759 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 6 01:43:25.702695 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 6 01:43:25.706971 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 6 01:43:25.710877 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 6 01:43:25.723971 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 6 01:43:25.735688 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Mar 6 01:43:25.753862 kernel: loop0: detected capacity change from 0 to 219192 Mar 6 01:43:25.815139 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 6 01:43:26.029606 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 6 01:43:26.036647 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:43:26.046524 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 6 01:43:26.055824 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 6 01:43:26.057964 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 6 01:43:26.106623 kernel: loop1: detected capacity change from 0 to 140768 Mar 6 01:43:26.106785 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 6 01:43:26.146803 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 6 01:43:26.151535 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 6 01:43:26.158528 kernel: loop2: detected capacity change from 0 to 142488 Mar 6 01:43:26.213766 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Mar 6 01:43:26.213798 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Mar 6 01:43:26.225656 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 01:43:26.243635 kernel: loop3: detected capacity change from 0 to 219192 Mar 6 01:43:26.277476 kernel: loop4: detected capacity change from 0 to 140768 Mar 6 01:43:26.301126 kernel: loop5: detected capacity change from 0 to 142488 Mar 6 01:43:26.330524 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 6 01:43:26.331313 (sd-merge)[1192]: Merged extensions into '/usr'. 
Mar 6 01:43:26.337077 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Mar 6 01:43:26.337177 systemd[1]: Reloading... Mar 6 01:43:26.513517 zram_generator::config[1215]: No configuration found. Mar 6 01:43:26.648700 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 6 01:43:26.704321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:43:26.749720 systemd[1]: Reloading finished in 411 ms. Mar 6 01:43:26.800091 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 6 01:43:26.805201 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 6 01:43:26.810205 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 6 01:43:26.842830 systemd[1]: Starting ensure-sysext.service... Mar 6 01:43:26.846921 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 6 01:43:26.852155 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:43:26.858837 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Mar 6 01:43:26.858883 systemd[1]: Reloading... Mar 6 01:43:26.879548 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 6 01:43:26.880154 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 6 01:43:26.881520 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 6 01:43:26.881842 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. 
Mar 6 01:43:26.881945 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Mar 6 01:43:26.886252 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 01:43:26.886349 systemd-tmpfiles[1257]: Skipping /boot Mar 6 01:43:26.901190 systemd-udevd[1258]: Using default interface naming scheme 'v255'. Mar 6 01:43:26.901790 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 01:43:26.901798 systemd-tmpfiles[1257]: Skipping /boot Mar 6 01:43:26.938492 zram_generator::config[1287]: No configuration found. Mar 6 01:43:26.999816 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1294) Mar 6 01:43:27.065795 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 6 01:43:27.073742 kernel: ACPI: button: Power Button [PWRF] Mar 6 01:43:27.092557 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 6 01:43:27.092899 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 6 01:43:27.113479 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 6 01:43:27.121823 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 6 01:43:27.122230 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 6 01:43:27.123894 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:43:27.162558 kernel: mousedev: PS/2 mouse device common for all mice Mar 6 01:43:27.194528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 6 01:43:27.199552 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 6 01:43:27.199767 systemd[1]: Reloading finished in 340 ms. 
Mar 6 01:43:27.273220 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:43:27.280939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:43:27.296683 kernel: kvm_amd: TSC scaling supported Mar 6 01:43:27.296805 kernel: kvm_amd: Nested Virtualization enabled Mar 6 01:43:27.297191 kernel: kvm_amd: Nested Paging enabled Mar 6 01:43:27.297278 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 6 01:43:27.297335 kernel: kvm_amd: PMU virtualization is disabled Mar 6 01:43:27.365573 kernel: EDAC MC: Ver: 3.0.0 Mar 6 01:43:27.370654 systemd[1]: Finished ensure-sysext.service. Mar 6 01:43:27.386188 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:43:27.402681 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:43:27.409483 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 6 01:43:27.414605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:43:27.423664 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:43:27.429252 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 01:43:27.436673 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:43:27.443964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:43:27.447800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:43:27.451684 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 6 01:43:27.458684 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Mar 6 01:43:27.466386 augenrules[1377]: No rules Mar 6 01:43:27.467005 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 01:43:27.471594 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 01:43:27.474257 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 6 01:43:27.482752 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 6 01:43:27.489875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 01:43:27.494425 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:43:27.496978 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 6 01:43:27.502410 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 6 01:43:27.506637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:43:27.506842 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:43:27.512215 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 01:43:27.512620 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 01:43:27.517650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:43:27.517874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:43:27.522405 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:43:27.522662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:43:27.527173 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 6 01:43:27.533730 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Mar 6 01:43:27.540830 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 6 01:43:27.565861 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 6 01:43:27.571132 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 01:43:27.571253 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 01:43:27.573322 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 6 01:43:27.580367 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 6 01:43:27.582899 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 6 01:43:27.584797 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 6 01:43:27.586013 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 6 01:43:27.592981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 01:43:27.599587 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 6 01:43:27.618113 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 6 01:43:27.624425 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 6 01:43:27.634924 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 6 01:43:27.647503 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 6 01:43:27.646227 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Mar 6 01:43:27.690129 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 6 01:43:27.728328 systemd-networkd[1383]: lo: Link UP Mar 6 01:43:27.728361 systemd-networkd[1383]: lo: Gained carrier Mar 6 01:43:27.730294 systemd-networkd[1383]: Enumeration completed Mar 6 01:43:27.730424 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 01:43:27.732899 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:43:27.732942 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 01:43:27.734930 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 6 01:43:27.737064 systemd-networkd[1383]: eth0: Link UP Mar 6 01:43:27.737081 systemd-networkd[1383]: eth0: Gained carrier Mar 6 01:43:27.737098 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:43:27.740346 systemd[1]: Reached target time-set.target - System Time Set. Mar 6 01:43:27.749335 systemd-resolved[1384]: Positive Trust Anchors: Mar 6 01:43:27.749350 systemd-resolved[1384]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 01:43:27.749396 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 01:43:27.754548 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 6 01:43:27.754685 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 6 01:43:27.754765 systemd-resolved[1384]: Defaulting to hostname 'linux'. Mar 6 01:43:27.755758 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Mar 6 01:43:27.757680 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 6 01:43:27.757754 systemd-timesyncd[1385]: Initial clock synchronization to Fri 2026-03-06 01:43:28.002107 UTC. Mar 6 01:43:27.759967 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 01:43:27.765008 systemd[1]: Reached target network.target - Network. Mar 6 01:43:27.769122 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 01:43:27.774201 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 01:43:27.778987 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 6 01:43:27.784250 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Mar 6 01:43:27.789992 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 6 01:43:27.794750 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 6 01:43:27.800256 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 6 01:43:27.805945 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 6 01:43:27.806017 systemd[1]: Reached target paths.target - Path Units. Mar 6 01:43:27.809941 systemd[1]: Reached target timers.target - Timer Units. Mar 6 01:43:27.815015 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 6 01:43:27.822187 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 6 01:43:27.838403 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 6 01:43:27.843809 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 6 01:43:27.848754 systemd[1]: Reached target sockets.target - Socket Units. Mar 6 01:43:27.852887 systemd[1]: Reached target basic.target - Basic System. Mar 6 01:43:27.856956 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 6 01:43:27.857055 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 6 01:43:27.865609 systemd[1]: Starting containerd.service - containerd container runtime... Mar 6 01:43:27.870246 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 6 01:43:27.874558 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 6 01:43:27.879072 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 6 01:43:27.882276 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 6 01:43:27.883607 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 6 01:43:27.884963 jq[1425]: false Mar 6 01:43:27.888253 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 6 01:43:27.892545 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 6 01:43:27.901632 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 6 01:43:27.905694 extend-filesystems[1426]: Found loop3 Mar 6 01:43:27.908131 extend-filesystems[1426]: Found loop4 Mar 6 01:43:27.908131 extend-filesystems[1426]: Found loop5 Mar 6 01:43:27.908131 extend-filesystems[1426]: Found sr0 Mar 6 01:43:27.908131 extend-filesystems[1426]: Found vda Mar 6 01:43:27.908131 extend-filesystems[1426]: Found vda1 Mar 6 01:43:27.908131 extend-filesystems[1426]: Found vda2 Mar 6 01:43:27.908131 extend-filesystems[1426]: Found vda3 Mar 6 01:43:27.908131 extend-filesystems[1426]: Found usr Mar 6 01:43:27.908131 extend-filesystems[1426]: Found vda4 Mar 6 01:43:27.908131 extend-filesystems[1426]: Found vda6 Mar 6 01:43:27.908131 extend-filesystems[1426]: Found vda7 Mar 6 01:43:27.908131 extend-filesystems[1426]: Found vda9 Mar 6 01:43:27.908131 extend-filesystems[1426]: Checking size of /dev/vda9 Mar 6 01:43:27.941796 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 6 01:43:27.915242 dbus-daemon[1424]: [system] SELinux support is enabled Mar 6 01:43:27.942262 extend-filesystems[1426]: Resized partition /dev/vda9 Mar 6 01:43:27.954188 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1293) Mar 6 01:43:27.954581 extend-filesystems[1441]: resize2fs 1.47.1 (20-May-2024) Mar 6 01:43:27.943227 systemd[1]: Starting 
systemd-logind.service - User Login Management... Mar 6 01:43:27.963621 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 6 01:43:27.964162 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 6 01:43:27.970776 systemd[1]: Starting update-engine.service - Update Engine... Mar 6 01:43:27.985744 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 6 01:43:27.991215 jq[1448]: true Mar 6 01:43:28.011797 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 6 01:43:28.011894 update_engine[1446]: I20260306 01:43:28.009565 1446 main.cc:92] Flatcar Update Engine starting Mar 6 01:43:28.011894 update_engine[1446]: I20260306 01:43:28.011394 1446 update_check_scheduler.cc:74] Next update check in 8m39s Mar 6 01:43:27.995879 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 6 01:43:28.003374 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 6 01:43:28.003659 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 6 01:43:28.003997 systemd[1]: motdgen.service: Deactivated successfully. Mar 6 01:43:28.004221 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 6 01:43:28.011922 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 6 01:43:28.013204 extend-filesystems[1441]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 6 01:43:28.013204 extend-filesystems[1441]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 6 01:43:28.013204 extend-filesystems[1441]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 6 01:43:28.012177 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 6 01:43:28.029641 extend-filesystems[1426]: Resized filesystem in /dev/vda9 Mar 6 01:43:28.017724 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 6 01:43:28.018274 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 6 01:43:28.025428 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Mar 6 01:43:28.025517 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 6 01:43:28.029226 systemd-logind[1442]: New seat seat0. Mar 6 01:43:28.037673 systemd[1]: Started systemd-logind.service - User Login Management. Mar 6 01:43:28.051741 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 6 01:43:28.060781 jq[1452]: true Mar 6 01:43:28.064065 dbus-daemon[1424]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 6 01:43:28.070038 tar[1450]: linux-amd64/LICENSE Mar 6 01:43:28.072805 tar[1450]: linux-amd64/helm Mar 6 01:43:28.088716 systemd[1]: Started update-engine.service - Update Engine. Mar 6 01:43:28.112053 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 6 01:43:28.116445 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 6 01:43:28.124807 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 6 01:43:28.124990 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 6 01:43:28.168968 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 6 01:43:28.194561 bash[1478]: Updated "/home/core/.ssh/authorized_keys" Mar 6 01:43:28.201989 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 6 01:43:28.217178 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 6 01:43:28.258507 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 6 01:43:28.261605 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 6 01:43:28.336387 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 6 01:43:28.360670 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 6 01:43:28.397634 systemd[1]: issuegen.service: Deactivated successfully. Mar 6 01:43:28.397981 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 6 01:43:28.417111 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 6 01:43:28.454914 containerd[1453]: time="2026-03-06T01:43:28.454648455Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 6 01:43:28.507277 kernel: hrtimer: interrupt took 5706270 ns Mar 6 01:43:29.118756 containerd[1453]: time="2026-03-06T01:43:29.118562744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:43:29.131246 containerd[1453]: time="2026-03-06T01:43:29.131043670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:43:29.131246 containerd[1453]: time="2026-03-06T01:43:29.131173808Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Mar 6 01:43:29.131246 containerd[1453]: time="2026-03-06T01:43:29.131214285Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 6 01:43:29.131786 containerd[1453]: time="2026-03-06T01:43:29.131761023Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 6 01:43:29.131869 containerd[1453]: time="2026-03-06T01:43:29.131787274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 6 01:43:29.131972 containerd[1453]: time="2026-03-06T01:43:29.131901118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:43:29.132035 containerd[1453]: time="2026-03-06T01:43:29.131970109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:43:29.132412 containerd[1453]: time="2026-03-06T01:43:29.132340317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:43:29.132412 containerd[1453]: time="2026-03-06T01:43:29.132402902Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 6 01:43:29.132549 containerd[1453]: time="2026-03-06T01:43:29.132427044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:43:29.132549 containerd[1453]: time="2026-03-06T01:43:29.132442021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 6 01:43:29.132739 containerd[1453]: time="2026-03-06T01:43:29.132673307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:43:29.133137 containerd[1453]: time="2026-03-06T01:43:29.133069643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:43:29.133444 containerd[1453]: time="2026-03-06T01:43:29.133374109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:43:29.133444 containerd[1453]: time="2026-03-06T01:43:29.133429647Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 6 01:43:29.133717 containerd[1453]: time="2026-03-06T01:43:29.133649525Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 6 01:43:29.133823 containerd[1453]: time="2026-03-06T01:43:29.133779888Z" level=info msg="metadata content store policy set" policy=shared Mar 6 01:43:29.142144 containerd[1453]: time="2026-03-06T01:43:29.142079400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 6 01:43:29.142244 containerd[1453]: time="2026-03-06T01:43:29.142164811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 6 01:43:29.142244 containerd[1453]: time="2026-03-06T01:43:29.142200300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 6 01:43:29.142244 containerd[1453]: time="2026-03-06T01:43:29.142223065Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Mar 6 01:43:29.142443 containerd[1453]: time="2026-03-06T01:43:29.142242393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 6 01:43:29.142692 containerd[1453]: time="2026-03-06T01:43:29.142602530Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 6 01:43:29.143034 containerd[1453]: time="2026-03-06T01:43:29.142961131Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 6 01:43:29.143227 containerd[1453]: time="2026-03-06T01:43:29.143169910Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 6 01:43:29.143272 containerd[1453]: time="2026-03-06T01:43:29.143224368Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 6 01:43:29.143272 containerd[1453]: time="2026-03-06T01:43:29.143247132Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 6 01:43:29.143330 containerd[1453]: time="2026-03-06T01:43:29.143269115Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 6 01:43:29.143330 containerd[1453]: time="2026-03-06T01:43:29.143289853Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 6 01:43:29.143330 containerd[1453]: time="2026-03-06T01:43:29.143310837Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 6 01:43:29.143563 containerd[1453]: time="2026-03-06T01:43:29.143331236Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Mar 6 01:43:29.143563 containerd[1453]: time="2026-03-06T01:43:29.143352447Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 6 01:43:29.143563 containerd[1453]: time="2026-03-06T01:43:29.143372208Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 6 01:43:29.143563 containerd[1453]: time="2026-03-06T01:43:29.143390241Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 6 01:43:29.143563 containerd[1453]: time="2026-03-06T01:43:29.143407295Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 6 01:43:29.143563 containerd[1453]: time="2026-03-06T01:43:29.143516499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.143563 containerd[1453]: time="2026-03-06T01:43:29.143543924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.143563 containerd[1453]: time="2026-03-06T01:43:29.143561740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143579711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143631422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143649279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143676437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143693636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143710156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143733774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143749256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143765468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143780661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143799784Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143828854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143846125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144236 containerd[1453]: time="2026-03-06T01:43:29.143860249Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 6 01:43:29.144751 containerd[1453]: time="2026-03-06T01:43:29.143922072Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 6 01:43:29.144751 containerd[1453]: time="2026-03-06T01:43:29.143944044Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 6 01:43:29.144751 containerd[1453]: time="2026-03-06T01:43:29.143960750Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 6 01:43:29.144751 containerd[1453]: time="2026-03-06T01:43:29.143975687Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 6 01:43:29.144751 containerd[1453]: time="2026-03-06T01:43:29.143988833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 6 01:43:29.144751 containerd[1453]: time="2026-03-06T01:43:29.144004551Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 6 01:43:29.144751 containerd[1453]: time="2026-03-06T01:43:29.144017348Z" level=info msg="NRI interface is disabled by configuration." Mar 6 01:43:29.144751 containerd[1453]: time="2026-03-06T01:43:29.144044186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 6 01:43:29.145064 containerd[1453]: time="2026-03-06T01:43:29.144439740Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 6 01:43:29.145064 containerd[1453]: time="2026-03-06T01:43:29.144614099Z" level=info msg="Connect containerd service" Mar 6 01:43:29.145064 containerd[1453]: time="2026-03-06T01:43:29.144660729Z" level=info msg="using legacy CRI server" Mar 6 01:43:29.145064 containerd[1453]: time="2026-03-06T01:43:29.144669348Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 6 01:43:29.145064 containerd[1453]: time="2026-03-06T01:43:29.144808733Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 6 01:43:29.145898 containerd[1453]: time="2026-03-06T01:43:29.145859672Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 01:43:29.146741 containerd[1453]: time="2026-03-06T01:43:29.146307925Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 6 01:43:29.146741 containerd[1453]: time="2026-03-06T01:43:29.146375950Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 6 01:43:29.146914 containerd[1453]: time="2026-03-06T01:43:29.146336216Z" level=info msg="Start subscribing containerd event" Mar 6 01:43:29.147245 containerd[1453]: time="2026-03-06T01:43:29.147033128Z" level=info msg="Start recovering state" Mar 6 01:43:29.147387 containerd[1453]: time="2026-03-06T01:43:29.147368094Z" level=info msg="Start event monitor" Mar 6 01:43:29.147687 containerd[1453]: time="2026-03-06T01:43:29.147664833Z" level=info msg="Start snapshots syncer" Mar 6 01:43:29.147908 containerd[1453]: time="2026-03-06T01:43:29.147806368Z" level=info msg="Start cni network conf syncer for default" Mar 6 01:43:29.147985 containerd[1453]: time="2026-03-06T01:43:29.147967179Z" level=info msg="Start streaming server" Mar 6 01:43:29.491316 systemd[1]: Started containerd.service - containerd container runtime. Mar 6 01:43:29.495171 containerd[1453]: time="2026-03-06T01:43:29.495138228Z" level=info msg="containerd successfully booted in 1.054976s" Mar 6 01:43:29.908782 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 6 01:43:30.206106 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 6 01:43:30.213992 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 6 01:43:30.218116 systemd[1]: Reached target getty.target - Login Prompts. Mar 6 01:43:30.241757 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 6 01:43:30.256010 systemd[1]: Started sshd@0-10.0.0.105:22-10.0.0.1:45554.service - OpenSSH per-connection server daemon (10.0.0.1:45554). Mar 6 01:43:30.361855 sshd[1513]: Accepted publickey for core from 10.0.0.1 port 45554 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:30.365638 sshd[1513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:30.380944 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 6 01:43:30.415587 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 01:43:30.428438 systemd-logind[1442]: New session 1 of user core. Mar 6 01:43:30.529821 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 01:43:30.546105 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 6 01:43:30.576104 (systemd)[1517]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 01:43:30.906691 tar[1450]: linux-amd64/README.md Mar 6 01:43:30.927800 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 01:43:30.933414 systemd[1517]: Queued start job for default target default.target. Mar 6 01:43:30.935042 systemd[1517]: Created slice app.slice - User Application Slice. Mar 6 01:43:30.935096 systemd[1517]: Reached target paths.target - Paths. Mar 6 01:43:30.935110 systemd[1517]: Reached target timers.target - Timers. Mar 6 01:43:30.937715 systemd[1517]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 01:43:30.980669 systemd[1517]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 01:43:30.980904 systemd[1517]: Reached target sockets.target - Sockets. Mar 6 01:43:30.980943 systemd[1517]: Reached target basic.target - Basic System. Mar 6 01:43:30.981019 systemd[1517]: Reached target default.target - Main User Target. Mar 6 01:43:30.981063 systemd[1517]: Startup finished in 365ms. Mar 6 01:43:30.981276 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 01:43:31.002811 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 01:43:31.079999 systemd[1]: Started sshd@1-10.0.0.105:22-10.0.0.1:58046.service - OpenSSH per-connection server daemon (10.0.0.1:58046). 
Mar 6 01:43:31.172391 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 58046 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:31.176643 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:31.186863 systemd-logind[1442]: New session 2 of user core. Mar 6 01:43:31.196848 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 01:43:31.253131 systemd-networkd[1383]: eth0: Gained IPv6LL Mar 6 01:43:31.258202 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 6 01:43:31.264227 systemd[1]: Reached target network-online.target - Network is Online. Mar 6 01:43:31.280940 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 6 01:43:31.287828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:43:31.294883 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 6 01:43:31.319748 sshd[1531]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:31.331261 systemd[1]: Started sshd@2-10.0.0.105:22-10.0.0.1:58050.service - OpenSSH per-connection server daemon (10.0.0.1:58050). Mar 6 01:43:31.337963 systemd[1]: sshd@1-10.0.0.105:22-10.0.0.1:58046.service: Deactivated successfully. Mar 6 01:43:31.345388 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 6 01:43:31.351318 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 01:43:31.354302 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 6 01:43:31.354836 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 6 01:43:31.363655 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Mar 6 01:43:31.367963 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 6 01:43:31.369326 systemd-logind[1442]: Removed session 2. 
Mar 6 01:43:31.383563 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 58050 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:31.388339 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:31.395710 systemd-logind[1442]: New session 3 of user core. Mar 6 01:43:31.401839 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 01:43:31.781575 sshd[1545]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:31.787710 systemd[1]: sshd@2-10.0.0.105:22-10.0.0.1:58050.service: Deactivated successfully. Mar 6 01:43:31.791940 systemd[1]: session-3.scope: Deactivated successfully. Mar 6 01:43:31.795343 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Mar 6 01:43:31.797843 systemd-logind[1442]: Removed session 3. Mar 6 01:43:34.777281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:43:34.783398 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 01:43:34.789923 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:43:34.799827 systemd[1]: Startup finished in 1.234s (kernel) + 9.382s (initrd) + 10.646s (userspace) = 21.263s. Mar 6 01:43:36.694608 kubelet[1566]: E0306 01:43:36.694245 1566 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:43:36.700357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:43:36.700850 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:43:36.701510 systemd[1]: kubelet.service: Consumed 4.821s CPU time. 
Mar 6 01:43:41.904843 systemd[1]: Started sshd@3-10.0.0.105:22-10.0.0.1:59130.service - OpenSSH per-connection server daemon (10.0.0.1:59130). Mar 6 01:43:41.954742 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 59130 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:41.957035 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:41.962564 systemd-logind[1442]: New session 4 of user core. Mar 6 01:43:41.976750 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 01:43:42.037656 sshd[1580]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:42.046038 systemd[1]: sshd@3-10.0.0.105:22-10.0.0.1:59130.service: Deactivated successfully. Mar 6 01:43:42.048399 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 01:43:42.050996 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Mar 6 01:43:42.064945 systemd[1]: Started sshd@4-10.0.0.105:22-10.0.0.1:59146.service - OpenSSH per-connection server daemon (10.0.0.1:59146). Mar 6 01:43:42.066156 systemd-logind[1442]: Removed session 4. Mar 6 01:43:42.105226 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 59146 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:42.108083 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:42.116539 systemd-logind[1442]: New session 5 of user core. Mar 6 01:43:42.125672 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 01:43:42.183560 sshd[1587]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:42.197515 systemd[1]: sshd@4-10.0.0.105:22-10.0.0.1:59146.service: Deactivated successfully. Mar 6 01:43:42.199861 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 01:43:42.201949 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. 
Mar 6 01:43:42.209940 systemd[1]: Started sshd@5-10.0.0.105:22-10.0.0.1:59150.service - OpenSSH per-connection server daemon (10.0.0.1:59150). Mar 6 01:43:42.212066 systemd-logind[1442]: Removed session 5. Mar 6 01:43:42.252176 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 59150 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:42.254609 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:42.260311 systemd-logind[1442]: New session 6 of user core. Mar 6 01:43:42.269856 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 6 01:43:42.339162 sshd[1594]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:42.357573 systemd[1]: sshd@5-10.0.0.105:22-10.0.0.1:59150.service: Deactivated successfully. Mar 6 01:43:42.359728 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 01:43:42.361379 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Mar 6 01:43:42.362743 systemd[1]: Started sshd@6-10.0.0.105:22-10.0.0.1:59158.service - OpenSSH per-connection server daemon (10.0.0.1:59158). Mar 6 01:43:42.364312 systemd-logind[1442]: Removed session 6. Mar 6 01:43:42.434019 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 59158 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:42.436590 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:42.443207 systemd-logind[1442]: New session 7 of user core. Mar 6 01:43:42.456798 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 6 01:43:42.529277 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 01:43:42.539701 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:43:42.567119 sudo[1604]: pam_unix(sudo:session): session closed for user root Mar 6 01:43:42.570658 sshd[1601]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:42.592546 systemd[1]: sshd@6-10.0.0.105:22-10.0.0.1:59158.service: Deactivated successfully. Mar 6 01:43:42.594553 systemd[1]: session-7.scope: Deactivated successfully. Mar 6 01:43:42.596713 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Mar 6 01:43:42.602834 systemd[1]: Started sshd@7-10.0.0.105:22-10.0.0.1:59174.service - OpenSSH per-connection server daemon (10.0.0.1:59174). Mar 6 01:43:42.604095 systemd-logind[1442]: Removed session 7. Mar 6 01:43:42.649367 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 59174 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:42.651246 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:42.656876 systemd-logind[1442]: New session 8 of user core. Mar 6 01:43:42.670825 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 6 01:43:42.735189 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 01:43:42.735637 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:43:42.740893 sudo[1613]: pam_unix(sudo:session): session closed for user root Mar 6 01:43:42.749289 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 6 01:43:42.749711 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:43:42.775902 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Mar 6 01:43:42.783974 auditctl[1616]: No rules Mar 6 01:43:42.784614 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 01:43:42.784938 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 6 01:43:42.788156 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:43:42.852976 augenrules[1634]: No rules Mar 6 01:43:42.855517 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 6 01:43:42.859075 sudo[1612]: pam_unix(sudo:session): session closed for user root Mar 6 01:43:42.863128 sshd[1609]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:42.875152 systemd[1]: sshd@7-10.0.0.105:22-10.0.0.1:59174.service: Deactivated successfully. Mar 6 01:43:42.877061 systemd[1]: session-8.scope: Deactivated successfully. Mar 6 01:43:42.878714 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Mar 6 01:43:42.893944 systemd[1]: Started sshd@8-10.0.0.105:22-10.0.0.1:59180.service - OpenSSH per-connection server daemon (10.0.0.1:59180). Mar 6 01:43:42.895604 systemd-logind[1442]: Removed session 8. Mar 6 01:43:42.935955 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 59180 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:42.937926 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:42.945236 systemd-logind[1442]: New session 9 of user core. Mar 6 01:43:42.962753 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 6 01:43:43.024547 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 01:43:43.025107 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:43:44.222953 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 6 01:43:44.224730 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 01:43:45.395403 dockerd[1663]: time="2026-03-06T01:43:45.395136999Z" level=info msg="Starting up" Mar 6 01:43:45.908350 dockerd[1663]: time="2026-03-06T01:43:45.908188361Z" level=info msg="Loading containers: start." Mar 6 01:43:46.154484 kernel: Initializing XFRM netlink socket Mar 6 01:43:46.281818 systemd-networkd[1383]: docker0: Link UP Mar 6 01:43:46.328519 dockerd[1663]: time="2026-03-06T01:43:46.322887519Z" level=info msg="Loading containers: done." Mar 6 01:43:46.374112 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1562959920-merged.mount: Deactivated successfully. Mar 6 01:43:46.375431 dockerd[1663]: time="2026-03-06T01:43:46.375335402Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 01:43:46.375653 dockerd[1663]: time="2026-03-06T01:43:46.375592228Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 6 01:43:46.375858 dockerd[1663]: time="2026-03-06T01:43:46.375790936Z" level=info msg="Daemon has completed initialization" Mar 6 01:43:46.455323 dockerd[1663]: time="2026-03-06T01:43:46.455041252Z" level=info msg="API listen on /run/docker.sock" Mar 6 01:43:46.455875 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 6 01:43:46.800541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 6 01:43:46.817755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 6 01:43:47.492585 containerd[1453]: time="2026-03-06T01:43:47.492544734Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 6 01:43:47.522554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:43:47.523645 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:43:47.713184 kubelet[1817]: E0306 01:43:47.713067 1817 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:43:47.722530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:43:47.722818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:43:48.212961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056875482.mount: Deactivated successfully. 
Mar 6 01:43:53.443049 containerd[1453]: time="2026-03-06T01:43:53.442934645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:53.443799 containerd[1453]: time="2026-03-06T01:43:53.443744854Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 6 01:43:53.445301 containerd[1453]: time="2026-03-06T01:43:53.445156486Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:53.448306 containerd[1453]: time="2026-03-06T01:43:53.448261563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:53.449807 containerd[1453]: time="2026-03-06T01:43:53.449716047Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 5.957129535s" Mar 6 01:43:53.449807 containerd[1453]: time="2026-03-06T01:43:53.449762945Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 6 01:43:53.454616 containerd[1453]: time="2026-03-06T01:43:53.454574334Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 6 01:43:56.040558 containerd[1453]: time="2026-03-06T01:43:56.040362237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:56.042514 containerd[1453]: time="2026-03-06T01:43:56.042360455Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 6 01:43:56.043893 containerd[1453]: time="2026-03-06T01:43:56.043837180Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:56.048945 containerd[1453]: time="2026-03-06T01:43:56.048840820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:56.050746 containerd[1453]: time="2026-03-06T01:43:56.050659667Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 2.596034567s" Mar 6 01:43:56.050746 containerd[1453]: time="2026-03-06T01:43:56.050727693Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 6 01:43:56.058370 containerd[1453]: time="2026-03-06T01:43:56.058257922Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 6 01:43:57.814525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 6 01:43:57.862760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:43:58.890347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 6 01:43:58.927276 (kubelet)[1900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:43:59.265561 containerd[1453]: time="2026-03-06T01:43:59.257403510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:59.265561 containerd[1453]: time="2026-03-06T01:43:59.264055882Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 6 01:43:59.288598 containerd[1453]: time="2026-03-06T01:43:59.288239130Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:59.377261 containerd[1453]: time="2026-03-06T01:43:59.377034440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:59.386517 containerd[1453]: time="2026-03-06T01:43:59.384969104Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 3.326619689s" Mar 6 01:43:59.386517 containerd[1453]: time="2026-03-06T01:43:59.385058354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 6 01:43:59.392243 containerd[1453]: time="2026-03-06T01:43:59.392201445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 6 01:43:59.751160 kubelet[1900]: 
E0306 01:43:59.750799 1900 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:43:59.768335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:43:59.768788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:43:59.770185 systemd[1]: kubelet.service: Consumed 1.242s CPU time. Mar 6 01:44:01.489902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234536149.mount: Deactivated successfully. Mar 6 01:44:02.213623 containerd[1453]: time="2026-03-06T01:44:02.211991454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:02.216118 containerd[1453]: time="2026-03-06T01:44:02.215945659Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 6 01:44:02.218882 containerd[1453]: time="2026-03-06T01:44:02.218623632Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:02.224742 containerd[1453]: time="2026-03-06T01:44:02.224524954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:02.225643 containerd[1453]: time="2026-03-06T01:44:02.225535009Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 2.832952008s" Mar 6 01:44:02.225643 containerd[1453]: time="2026-03-06T01:44:02.225600732Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 6 01:44:02.230975 containerd[1453]: time="2026-03-06T01:44:02.230689421Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 6 01:44:02.925671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545432478.mount: Deactivated successfully. Mar 6 01:44:05.720947 containerd[1453]: time="2026-03-06T01:44:05.720530254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:05.722588 containerd[1453]: time="2026-03-06T01:44:05.722476211Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 6 01:44:05.724175 containerd[1453]: time="2026-03-06T01:44:05.724088444Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:05.728599 containerd[1453]: time="2026-03-06T01:44:05.728497600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:05.731783 containerd[1453]: time="2026-03-06T01:44:05.731676194Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.500941328s" Mar 6 01:44:05.731783 containerd[1453]: time="2026-03-06T01:44:05.731740047Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 6 01:44:05.735866 containerd[1453]: time="2026-03-06T01:44:05.735687779Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 6 01:44:06.240225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4203591746.mount: Deactivated successfully. Mar 6 01:44:06.250955 containerd[1453]: time="2026-03-06T01:44:06.250851227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:06.252336 containerd[1453]: time="2026-03-06T01:44:06.252237941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 6 01:44:06.253887 containerd[1453]: time="2026-03-06T01:44:06.253819095Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:06.258673 containerd[1453]: time="2026-03-06T01:44:06.258540243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:06.260158 containerd[1453]: time="2026-03-06T01:44:06.260024397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 524.207979ms" Mar 6 
01:44:06.260158 containerd[1453]: time="2026-03-06T01:44:06.260059265Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 6 01:44:06.264573 containerd[1453]: time="2026-03-06T01:44:06.264519251Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 6 01:44:07.111189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3412853305.mount: Deactivated successfully. Mar 6 01:44:09.627054 containerd[1453]: time="2026-03-06T01:44:09.626906856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:09.629015 containerd[1453]: time="2026-03-06T01:44:09.628356985Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 6 01:44:09.630522 containerd[1453]: time="2026-03-06T01:44:09.630394753Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:09.635410 containerd[1453]: time="2026-03-06T01:44:09.635312440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:09.636867 containerd[1453]: time="2026-03-06T01:44:09.636807259Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 3.372223997s" Mar 6 01:44:09.636916 containerd[1453]: time="2026-03-06T01:44:09.636869325Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference 
\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 6 01:44:09.798981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 6 01:44:09.810851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:44:10.044215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:44:10.061047 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:44:10.128208 kubelet[2067]: E0306 01:44:10.128094 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:44:10.131961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:44:10.132223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:44:13.558488 update_engine[1446]: I20260306 01:44:13.558038 1446 update_attempter.cc:509] Updating boot flags... Mar 6 01:44:13.628513 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2082) Mar 6 01:44:13.667505 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2085) Mar 6 01:44:14.666345 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:44:14.680804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:44:14.713676 systemd[1]: Reloading requested from client PID 2096 ('systemctl') (unit session-9.scope)... Mar 6 01:44:14.713713 systemd[1]: Reloading... Mar 6 01:44:14.815543 zram_generator::config[2135]: No configuration found. 
Mar 6 01:44:14.932079 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 6 01:44:15.025927 systemd[1]: Reloading finished in 311 ms.
Mar 6 01:44:15.084680 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 6 01:44:15.084852 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 6 01:44:15.085271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:44:15.089548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 01:44:15.275872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:44:15.281012 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 6 01:44:15.420834 kubelet[2184]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 6 01:44:15.420834 kubelet[2184]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 01:44:15.421374 kubelet[2184]: I0306 01:44:15.420930 2184 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 6 01:44:16.537055 kubelet[2184]: I0306 01:44:16.536971 2184 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 6 01:44:16.537055 kubelet[2184]: I0306 01:44:16.537017 2184 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 6 01:44:16.538060 kubelet[2184]: I0306 01:44:16.538001 2184 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 6 01:44:16.538102 kubelet[2184]: I0306 01:44:16.538080 2184 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 6 01:44:16.538659 kubelet[2184]: I0306 01:44:16.538612 2184 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 6 01:44:16.644110 kubelet[2184]: I0306 01:44:16.643950 2184 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 01:44:16.644110 kubelet[2184]: E0306 01:44:16.644045 2184 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 6 01:44:16.653558 kubelet[2184]: E0306 01:44:16.653392 2184 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 6 01:44:16.653635 kubelet[2184]: I0306 01:44:16.653620 2184 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 6 01:44:16.666773 kubelet[2184]: I0306 01:44:16.666674 2184 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 6 01:44:16.671638 kubelet[2184]: I0306 01:44:16.671526 2184 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 6 01:44:16.674058 kubelet[2184]: I0306 01:44:16.671597 2184 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 6 01:44:16.674379 kubelet[2184]: I0306 01:44:16.674112 2184 topology_manager.go:138] "Creating topology manager with none policy"
Mar 6 01:44:16.674379 kubelet[2184]: I0306 01:44:16.674132 2184 container_manager_linux.go:306] "Creating device plugin manager"
Mar 6 01:44:16.674617 kubelet[2184]: I0306 01:44:16.674405 2184 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 6 01:44:16.677098 kubelet[2184]: I0306 01:44:16.677007 2184 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 01:44:16.677827 kubelet[2184]: I0306 01:44:16.677739 2184 kubelet.go:475] "Attempting to sync node with API server"
Mar 6 01:44:16.677894 kubelet[2184]: I0306 01:44:16.677832 2184 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 01:44:16.678323 kubelet[2184]: I0306 01:44:16.678214 2184 kubelet.go:387] "Adding apiserver pod source"
Mar 6 01:44:16.678654 kubelet[2184]: I0306 01:44:16.678384 2184 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 01:44:16.679738 kubelet[2184]: E0306 01:44:16.679608 2184 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 6 01:44:16.679956 kubelet[2184]: E0306 01:44:16.679608 2184 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 6 01:44:16.683044 kubelet[2184]: I0306 01:44:16.682968 2184 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 6 01:44:16.684153 kubelet[2184]: I0306 01:44:16.684060 2184 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 01:44:16.684207 kubelet[2184]: I0306 01:44:16.684159 2184 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 6 01:44:16.684485 kubelet[2184]: W0306 01:44:16.684403 2184 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 6 01:44:16.690778 kubelet[2184]: I0306 01:44:16.690718 2184 server.go:1262] "Started kubelet"
Mar 6 01:44:16.692246 kubelet[2184]: I0306 01:44:16.691949 2184 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 01:44:16.692246 kubelet[2184]: I0306 01:44:16.692111 2184 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 6 01:44:16.692650 kubelet[2184]: I0306 01:44:16.692594 2184 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 01:44:16.692868 kubelet[2184]: I0306 01:44:16.692831 2184 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 01:44:16.695591 kubelet[2184]: I0306 01:44:16.693795 2184 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 6 01:44:16.695591 kubelet[2184]: I0306 01:44:16.695118 2184 server.go:310] "Adding debug handlers to kubelet server"
Mar 6 01:44:16.697194 kubelet[2184]: E0306 01:44:16.695703 2184 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.105:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1d27018c5806 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:44:16.690616326 +0000 UTC m=+1.390326531,LastTimestamp:2026-03-06 01:44:16.690616326 +0000 UTC m=+1.390326531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 6 01:44:16.698494 kubelet[2184]: I0306 01:44:16.697722 2184 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 01:44:16.698494 kubelet[2184]: E0306 01:44:16.697821 2184 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 6 01:44:16.698494 kubelet[2184]: I0306 01:44:16.697957 2184 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 6 01:44:16.698494 kubelet[2184]: I0306 01:44:16.698327 2184 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 6 01:44:16.699271 kubelet[2184]: I0306 01:44:16.699256 2184 reconciler.go:29] "Reconciler: start to sync state"
Mar 6 01:44:16.700351 kubelet[2184]: E0306 01:44:16.700329 2184 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 6 01:44:16.701181 kubelet[2184]: E0306 01:44:16.701069 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="200ms"
Mar 6 01:44:16.702100 kubelet[2184]: E0306 01:44:16.702044 2184 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 6 01:44:16.702100 kubelet[2184]: I0306 01:44:16.702095 2184 factory.go:223] Registration of the systemd container factory successfully
Mar 6 01:44:16.702266 kubelet[2184]: I0306 01:44:16.702213 2184 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 01:44:16.703787 kubelet[2184]: I0306 01:44:16.703749 2184 factory.go:223] Registration of the containerd container factory successfully
Mar 6 01:44:16.809734 kubelet[2184]: E0306 01:44:16.809664 2184 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 6 01:44:16.812837 kubelet[2184]: I0306 01:44:16.812174 2184 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 6 01:44:16.821126 kubelet[2184]: I0306 01:44:16.820722 2184 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 6 01:44:16.821126 kubelet[2184]: I0306 01:44:16.821150 2184 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 6 01:44:16.827035 kubelet[2184]: I0306 01:44:16.826977 2184 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 6 01:44:16.827897 kubelet[2184]: E0306 01:44:16.827058 2184 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 6 01:44:16.830162 kubelet[2184]: E0306 01:44:16.828809 2184 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 6 01:44:16.895152 kubelet[2184]: I0306 01:44:16.895019 2184 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 6 01:44:16.895152 kubelet[2184]: I0306 01:44:16.895075 2184 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 6 01:44:16.895152 kubelet[2184]: I0306 01:44:16.895175 2184 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 01:44:16.905727 kubelet[2184]: I0306 01:44:16.905541 2184 policy_none.go:49] "None policy: Start"
Mar 6 01:44:16.908852 kubelet[2184]: I0306 01:44:16.905946 2184 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 6 01:44:16.908852 kubelet[2184]: I0306 01:44:16.906994 2184 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 6 01:44:16.908852 kubelet[2184]: E0306 01:44:16.908603 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="400ms"
Mar 6 01:44:16.917151 kubelet[2184]: E0306 01:44:16.915415 2184 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 6 01:44:16.922224 kubelet[2184]: I0306 01:44:16.922117 2184 policy_none.go:47] "Start"
Mar 6 01:44:16.930727 kubelet[2184]: E0306 01:44:16.930254 2184 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 6 01:44:16.951388 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 6 01:44:16.970720 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 6 01:44:16.976361 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 6 01:44:16.982943 kubelet[2184]: E0306 01:44:16.982830 2184 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 6 01:44:16.983185 kubelet[2184]: I0306 01:44:16.983174 2184 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 6 01:44:16.983266 kubelet[2184]: I0306 01:44:16.983206 2184 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 6 01:44:16.984271 kubelet[2184]: I0306 01:44:16.983762 2184 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 6 01:44:16.986186 kubelet[2184]: E0306 01:44:16.986032 2184 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 6 01:44:16.986186 kubelet[2184]: E0306 01:44:16.986168 2184 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 6 01:44:17.085069 kubelet[2184]: I0306 01:44:17.084935 2184 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 6 01:44:17.085643 kubelet[2184]: E0306 01:44:17.085593 2184 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost"
Mar 6 01:44:17.150624 systemd[1]: Created slice kubepods-burstable-pod3ac9f33d3563032bb7738778243081c2.slice - libcontainer container kubepods-burstable-pod3ac9f33d3563032bb7738778243081c2.slice.
Mar 6 01:44:17.170014 kubelet[2184]: E0306 01:44:17.169912 2184 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:44:17.175046 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 6 01:44:17.183917 kubelet[2184]: E0306 01:44:17.183837 2184 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:44:17.190909 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 6 01:44:17.193905 kubelet[2184]: E0306 01:44:17.193800 2184 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:44:17.219704 kubelet[2184]: I0306 01:44:17.219484 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ac9f33d3563032bb7738778243081c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ac9f33d3563032bb7738778243081c2\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 01:44:17.219704 kubelet[2184]: I0306 01:44:17.219644 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ac9f33d3563032bb7738778243081c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ac9f33d3563032bb7738778243081c2\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 01:44:17.219704 kubelet[2184]: I0306 01:44:17.219717 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:44:17.219704 kubelet[2184]: I0306 01:44:17.219737 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:44:17.219704 kubelet[2184]: I0306 01:44:17.219759 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:44:17.220528 kubelet[2184]: I0306 01:44:17.219781 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:44:17.220528 kubelet[2184]: I0306 01:44:17.219800 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:44:17.220528 kubelet[2184]: I0306 01:44:17.219838 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 6 01:44:17.220528 kubelet[2184]: I0306 01:44:17.219894 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ac9f33d3563032bb7738778243081c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ac9f33d3563032bb7738778243081c2\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 01:44:17.290181 kubelet[2184]: I0306 01:44:17.290062 2184 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 6 01:44:17.290837 kubelet[2184]: E0306 01:44:17.290776 2184 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost"
Mar 6 01:44:17.310254 kubelet[2184]: E0306 01:44:17.310213 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="800ms"
Mar 6 01:44:17.477996 kubelet[2184]: E0306 01:44:17.477654 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:44:17.481142 containerd[1453]: time="2026-03-06T01:44:17.481035279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ac9f33d3563032bb7738778243081c2,Namespace:kube-system,Attempt:0,}"
Mar 6 01:44:17.488062 kubelet[2184]: E0306 01:44:17.487985 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:44:17.488945 containerd[1453]: time="2026-03-06T01:44:17.488814076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 6 01:44:17.497587 kubelet[2184]: E0306 01:44:17.497396 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:44:17.498282 containerd[1453]: time="2026-03-06T01:44:17.498112181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 6 01:44:17.589908 kubelet[2184]: E0306 01:44:17.589736 2184 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 6 01:44:17.694514 kubelet[2184]: I0306 01:44:17.694328 2184 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 6 01:44:17.694981 kubelet[2184]: E0306 01:44:17.694684 2184 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 6 01:44:17.694981 kubelet[2184]: E0306 01:44:17.694803 2184 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost"
Mar 6 01:44:17.732399 kubelet[2184]: E0306 01:44:17.732189 2184 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 6 01:44:17.849643 kubelet[2184]: E0306 01:44:17.849526 2184 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 6 01:44:17.955297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477725611.mount: Deactivated successfully.
Mar 6 01:44:17.975764 containerd[1453]: time="2026-03-06T01:44:17.975594651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 01:44:17.977000 containerd[1453]: time="2026-03-06T01:44:17.976806638Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 01:44:17.978318 containerd[1453]: time="2026-03-06T01:44:17.978219523Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 01:44:17.979872 containerd[1453]: time="2026-03-06T01:44:17.979762349Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 6 01:44:17.980989 containerd[1453]: time="2026-03-06T01:44:17.980859915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 6 01:44:17.982100 containerd[1453]: time="2026-03-06T01:44:17.982057649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 6 01:44:17.983841 containerd[1453]: time="2026-03-06T01:44:17.983648198Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 01:44:17.998500 containerd[1453]: time="2026-03-06T01:44:17.998218749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 01:44:18.000244 containerd[1453]: time="2026-03-06T01:44:18.000117543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.907499ms"
Mar 6 01:44:18.008226 containerd[1453]: time="2026-03-06T01:44:18.008055021Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.080612ms"
Mar 6 01:44:18.010664 containerd[1453]: time="2026-03-06T01:44:18.010505569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.25322ms"
Mar 6 01:44:18.111541 kubelet[2184]: E0306 01:44:18.111297 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="1.6s"
Mar 6 01:44:18.342202 containerd[1453]: time="2026-03-06T01:44:18.342078556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 6 01:44:18.342202 containerd[1453]: time="2026-03-06T01:44:18.342135113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 6 01:44:18.342202 containerd[1453]: time="2026-03-06T01:44:18.342148881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:44:18.342550 containerd[1453]: time="2026-03-06T01:44:18.342233796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:44:18.344959 containerd[1453]: time="2026-03-06T01:44:18.344625565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 6 01:44:18.344959 containerd[1453]: time="2026-03-06T01:44:18.344673394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 6 01:44:18.344959 containerd[1453]: time="2026-03-06T01:44:18.344686460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:44:18.344959 containerd[1453]: time="2026-03-06T01:44:18.344764553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:44:18.421178 containerd[1453]: time="2026-03-06T01:44:18.420662433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 6 01:44:18.421178 containerd[1453]: time="2026-03-06T01:44:18.420755665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 6 01:44:18.421178 containerd[1453]: time="2026-03-06T01:44:18.420766317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:44:18.421178 containerd[1453]: time="2026-03-06T01:44:18.420930848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:44:18.449025 systemd[1]: Started cri-containerd-0ef64b70d7d5a5ba6368a12b3690077e88d5a169036cc9e27ffd9c241cf834e2.scope - libcontainer container 0ef64b70d7d5a5ba6368a12b3690077e88d5a169036cc9e27ffd9c241cf834e2.
Mar 6 01:44:18.479723 systemd[1]: Started cri-containerd-1069983315ef6d89c6c2deaf3efb611f5f21c681acbdaa06b7b46d7b10bf47fb.scope - libcontainer container 1069983315ef6d89c6c2deaf3efb611f5f21c681acbdaa06b7b46d7b10bf47fb.
Mar 6 01:44:18.499125 kubelet[2184]: I0306 01:44:18.499088 2184 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 6 01:44:18.500276 kubelet[2184]: E0306 01:44:18.500211 2184 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost"
Mar 6 01:44:18.514088 systemd[1]: Started cri-containerd-966c467ce7e534945cdd3319425c9b19dcf10ba988763f9e8d347936ba47ba0a.scope - libcontainer container 966c467ce7e534945cdd3319425c9b19dcf10ba988763f9e8d347936ba47ba0a.
Mar 6 01:44:18.709288 containerd[1453]: time="2026-03-06T01:44:18.709089544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ef64b70d7d5a5ba6368a12b3690077e88d5a169036cc9e27ffd9c241cf834e2\"" Mar 6 01:44:18.711134 containerd[1453]: time="2026-03-06T01:44:18.711079152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ac9f33d3563032bb7738778243081c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1069983315ef6d89c6c2deaf3efb611f5f21c681acbdaa06b7b46d7b10bf47fb\"" Mar 6 01:44:18.713039 kubelet[2184]: E0306 01:44:18.712746 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:18.715967 kubelet[2184]: E0306 01:44:18.715913 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:18.724896 containerd[1453]: time="2026-03-06T01:44:18.724667448Z" level=info msg="CreateContainer within sandbox \"0ef64b70d7d5a5ba6368a12b3690077e88d5a169036cc9e27ffd9c241cf834e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 01:44:18.728508 containerd[1453]: time="2026-03-06T01:44:18.728216997Z" level=info msg="CreateContainer within sandbox \"1069983315ef6d89c6c2deaf3efb611f5f21c681acbdaa06b7b46d7b10bf47fb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 01:44:18.730147 containerd[1453]: time="2026-03-06T01:44:18.730114597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"966c467ce7e534945cdd3319425c9b19dcf10ba988763f9e8d347936ba47ba0a\"" Mar 6 01:44:18.731878 
kubelet[2184]: E0306 01:44:18.731704 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:18.738115 containerd[1453]: time="2026-03-06T01:44:18.738076022Z" level=info msg="CreateContainer within sandbox \"966c467ce7e534945cdd3319425c9b19dcf10ba988763f9e8d347936ba47ba0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 01:44:18.771921 containerd[1453]: time="2026-03-06T01:44:18.771692351Z" level=info msg="CreateContainer within sandbox \"0ef64b70d7d5a5ba6368a12b3690077e88d5a169036cc9e27ffd9c241cf834e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4728d6757901fae6a874c6cd814289506b82a3924075dffb4ed142d988636044\"" Mar 6 01:44:18.773206 containerd[1453]: time="2026-03-06T01:44:18.773034932Z" level=info msg="StartContainer for \"4728d6757901fae6a874c6cd814289506b82a3924075dffb4ed142d988636044\"" Mar 6 01:44:18.780796 containerd[1453]: time="2026-03-06T01:44:18.780625420Z" level=info msg="CreateContainer within sandbox \"1069983315ef6d89c6c2deaf3efb611f5f21c681acbdaa06b7b46d7b10bf47fb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa24f3f8a8a77aa049db90f797eaa18eb505c2ed1f8d4ef316cd11be4b876194\"" Mar 6 01:44:18.781399 containerd[1453]: time="2026-03-06T01:44:18.781353996Z" level=info msg="StartContainer for \"aa24f3f8a8a77aa049db90f797eaa18eb505c2ed1f8d4ef316cd11be4b876194\"" Mar 6 01:44:18.782935 containerd[1453]: time="2026-03-06T01:44:18.782806324Z" level=info msg="CreateContainer within sandbox \"966c467ce7e534945cdd3319425c9b19dcf10ba988763f9e8d347936ba47ba0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a92fee3d6f41f85db6c39a1e3376b13f4da042ef8f19ce52ca69d88bc1c10f27\"" Mar 6 01:44:18.783816 containerd[1453]: time="2026-03-06T01:44:18.783760355Z" level=info msg="StartContainer for 
\"a92fee3d6f41f85db6c39a1e3376b13f4da042ef8f19ce52ca69d88bc1c10f27\"" Mar 6 01:44:18.822091 systemd[1]: Started cri-containerd-4728d6757901fae6a874c6cd814289506b82a3924075dffb4ed142d988636044.scope - libcontainer container 4728d6757901fae6a874c6cd814289506b82a3924075dffb4ed142d988636044. Mar 6 01:44:18.839755 systemd[1]: Started cri-containerd-a92fee3d6f41f85db6c39a1e3376b13f4da042ef8f19ce52ca69d88bc1c10f27.scope - libcontainer container a92fee3d6f41f85db6c39a1e3376b13f4da042ef8f19ce52ca69d88bc1c10f27. Mar 6 01:44:18.847509 kubelet[2184]: E0306 01:44:18.844924 2184 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 01:44:18.849042 systemd[1]: Started cri-containerd-aa24f3f8a8a77aa049db90f797eaa18eb505c2ed1f8d4ef316cd11be4b876194.scope - libcontainer container aa24f3f8a8a77aa049db90f797eaa18eb505c2ed1f8d4ef316cd11be4b876194. 
Mar 6 01:44:18.994425 containerd[1453]: time="2026-03-06T01:44:18.993171301Z" level=info msg="StartContainer for \"aa24f3f8a8a77aa049db90f797eaa18eb505c2ed1f8d4ef316cd11be4b876194\" returns successfully" Mar 6 01:44:19.038328 containerd[1453]: time="2026-03-06T01:44:19.038229988Z" level=info msg="StartContainer for \"4728d6757901fae6a874c6cd814289506b82a3924075dffb4ed142d988636044\" returns successfully" Mar 6 01:44:19.046521 containerd[1453]: time="2026-03-06T01:44:19.046386642Z" level=info msg="StartContainer for \"a92fee3d6f41f85db6c39a1e3376b13f4da042ef8f19ce52ca69d88bc1c10f27\" returns successfully" Mar 6 01:44:19.896719 kubelet[2184]: E0306 01:44:19.896625 2184 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:19.897687 kubelet[2184]: E0306 01:44:19.897042 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:19.899491 kubelet[2184]: E0306 01:44:19.898003 2184 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:19.899491 kubelet[2184]: E0306 01:44:19.898192 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:19.901822 kubelet[2184]: E0306 01:44:19.901753 2184 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:19.901980 kubelet[2184]: E0306 01:44:19.901919 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:20.103983 kubelet[2184]: 
I0306 01:44:20.103890 2184 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:44:20.916738 kubelet[2184]: E0306 01:44:20.914309 2184 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:20.933807 kubelet[2184]: E0306 01:44:20.916183 2184 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:20.935064 kubelet[2184]: E0306 01:44:20.933999 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:20.935064 kubelet[2184]: E0306 01:44:20.934718 2184 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:20.935064 kubelet[2184]: E0306 01:44:20.935002 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:21.913737 kubelet[2184]: E0306 01:44:21.913625 2184 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:21.914275 kubelet[2184]: E0306 01:44:21.913844 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:21.914275 kubelet[2184]: E0306 01:44:21.914200 2184 kubelet.go:3216] "No need to
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:21.914412 kubelet[2184]: E0306 01:44:21.914343 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:21.915792 kubelet[2184]: E0306 01:44:21.915741 2184 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:21.916003 kubelet[2184]: E0306 01:44:21.915953 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:22.065847 kubelet[2184]: E0306 01:44:22.065769 2184 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 6 01:44:22.192959 kubelet[2184]: I0306 01:44:22.190707 2184 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 01:44:22.201400 kubelet[2184]: I0306 01:44:22.200656 2184 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:22.258761 kubelet[2184]: E0306 01:44:22.258604 2184 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a1d27018c5806 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:44:16.690616326 +0000 UTC m=+1.390326531,LastTimestamp:2026-03-06 01:44:16.690616326 +0000 UTC m=+1.390326531,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 01:44:22.260488 kubelet[2184]: E0306 01:44:22.260355 2184 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:22.260488 kubelet[2184]: I0306 01:44:22.260416 2184 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:22.268912 kubelet[2184]: E0306 01:44:22.268843 2184 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:22.268912 kubelet[2184]: I0306 01:44:22.268911 2184 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:22.274866 kubelet[2184]: E0306 01:44:22.274730 2184 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:22.681610 kubelet[2184]: I0306 01:44:22.681416 2184 apiserver.go:52] "Watching apiserver" Mar 6 01:44:22.699324 kubelet[2184]: I0306 01:44:22.699121 2184 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 6 01:44:24.137314 kubelet[2184]: I0306 01:44:24.136868 2184 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:24.160606 kubelet[2184]: E0306 01:44:24.160375 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:24.834683 systemd[1]: Reloading requested from client PID 2471 
('systemctl') (unit session-9.scope)... Mar 6 01:44:24.834724 systemd[1]: Reloading... Mar 6 01:44:24.925730 kubelet[2184]: E0306 01:44:24.925268 2184 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:24.948560 zram_generator::config[2510]: No configuration found. Mar 6 01:44:25.075626 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:44:25.181297 systemd[1]: Reloading finished in 345 ms. Mar 6 01:44:25.245905 kubelet[2184]: I0306 01:44:25.245831 2184 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:44:25.245939 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:44:25.272632 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 01:44:25.273173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:44:25.273270 systemd[1]: kubelet.service: Consumed 3.139s CPU time, 129.7M memory peak, 0B memory swap peak. Mar 6 01:44:25.283920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:44:25.477947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:44:25.490837 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:44:25.588074 kubelet[2555]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 01:44:25.588074 kubelet[2555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:44:25.588886 kubelet[2555]: I0306 01:44:25.588093 2555 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 01:44:25.596168 kubelet[2555]: I0306 01:44:25.596117 2555 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 6 01:44:25.596168 kubelet[2555]: I0306 01:44:25.596170 2555 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:44:25.596287 kubelet[2555]: I0306 01:44:25.596236 2555 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 6 01:44:25.596287 kubelet[2555]: I0306 01:44:25.596254 2555 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 01:44:25.597163 kubelet[2555]: I0306 01:44:25.596881 2555 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 01:44:25.598522 kubelet[2555]: I0306 01:44:25.598492 2555 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 01:44:25.600652 kubelet[2555]: I0306 01:44:25.600602 2555 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:44:25.605506 kubelet[2555]: E0306 01:44:25.605362 2555 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:44:25.605506 kubelet[2555]: I0306 01:44:25.605500 2555 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 6 01:44:25.611238 kubelet[2555]: I0306 01:44:25.611146 2555 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 6 01:44:25.611559 kubelet[2555]: I0306 01:44:25.611413 2555 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:44:25.611640 kubelet[2555]: I0306 01:44:25.611511 2555 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 01:44:25.611640 kubelet[2555]: I0306 01:44:25.611620 2555 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 01:44:25.611640 
kubelet[2555]: I0306 01:44:25.611627 2555 container_manager_linux.go:306] "Creating device plugin manager" Mar 6 01:44:25.611872 kubelet[2555]: I0306 01:44:25.611647 2555 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 6 01:44:25.611872 kubelet[2555]: I0306 01:44:25.611836 2555 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:44:25.612117 kubelet[2555]: I0306 01:44:25.612020 2555 kubelet.go:475] "Attempting to sync node with API server" Mar 6 01:44:25.612117 kubelet[2555]: I0306 01:44:25.612084 2555 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:44:25.612117 kubelet[2555]: I0306 01:44:25.612104 2555 kubelet.go:387] "Adding apiserver pod source" Mar 6 01:44:25.612117 kubelet[2555]: I0306 01:44:25.612113 2555 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:44:25.613522 kubelet[2555]: I0306 01:44:25.613403 2555 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:44:25.615067 kubelet[2555]: I0306 01:44:25.613894 2555 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:44:25.615067 kubelet[2555]: I0306 01:44:25.613920 2555 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 6 01:44:25.623514 kubelet[2555]: I0306 01:44:25.622207 2555 server.go:1262] "Started kubelet" Mar 6 01:44:25.624111 kubelet[2555]: I0306 01:44:25.624035 2555 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 01:44:25.632267 kubelet[2555]: I0306 01:44:25.631530 2555 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:44:25.632620 kubelet[2555]: I0306 01:44:25.632595 2555 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Mar 6 01:44:25.632756 kubelet[2555]: I0306 01:44:25.632740 2555 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 6 01:44:25.633178 kubelet[2555]: I0306 01:44:25.633155 2555 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:44:25.635933 kubelet[2555]: I0306 01:44:25.635918 2555 server.go:310] "Adding debug handlers to kubelet server" Mar 6 01:44:25.639021 kubelet[2555]: I0306 01:44:25.639004 2555 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:44:25.643266 kubelet[2555]: E0306 01:44:25.639982 2555 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:44:25.644417 kubelet[2555]: I0306 01:44:25.644364 2555 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 6 01:44:25.644639 kubelet[2555]: I0306 01:44:25.644601 2555 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:44:25.644763 kubelet[2555]: I0306 01:44:25.644698 2555 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:44:25.645180 kubelet[2555]: I0306 01:44:25.644924 2555 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 6 01:44:25.645505 kubelet[2555]: I0306 01:44:25.645391 2555 reconciler.go:29] "Reconciler: start to sync state" Mar 6 01:44:25.649650 kubelet[2555]: I0306 01:44:25.649582 2555 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:44:25.658643 kubelet[2555]: I0306 01:44:25.658578 2555 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 6 01:44:25.676589 kubelet[2555]: I0306 01:44:25.676554 2555 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 6 01:44:25.677305 kubelet[2555]: I0306 01:44:25.676841 2555 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 6 01:44:25.677305 kubelet[2555]: I0306 01:44:25.676877 2555 kubelet.go:2428] "Starting kubelet main sync loop" Mar 6 01:44:25.677305 kubelet[2555]: E0306 01:44:25.676925 2555 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:44:25.714570 kubelet[2555]: I0306 01:44:25.714403 2555 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 01:44:25.715274 kubelet[2555]: I0306 01:44:25.715137 2555 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 01:44:25.715274 kubelet[2555]: I0306 01:44:25.715184 2555 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:44:25.715553 kubelet[2555]: I0306 01:44:25.715522 2555 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 6 01:44:25.715553 kubelet[2555]: I0306 01:44:25.715534 2555 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 6 01:44:25.716661 kubelet[2555]: I0306 01:44:25.715652 2555 policy_none.go:49] "None policy: Start" Mar 6 01:44:25.716661 kubelet[2555]: I0306 01:44:25.715703 2555 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 6 01:44:25.716661 kubelet[2555]: I0306 01:44:25.715733 2555 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 6 01:44:25.716661 kubelet[2555]: I0306 01:44:25.715841 2555 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 6 01:44:25.716661 kubelet[2555]: I0306 01:44:25.715873 2555 policy_none.go:47] "Start" Mar 6 01:44:25.744598 kubelet[2555]: E0306 01:44:25.744324 2555 manager.go:513] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:44:25.747738 kubelet[2555]: I0306 01:44:25.747673 2555 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 01:44:25.747738 kubelet[2555]: I0306 01:44:25.747707 2555 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:44:25.753581 kubelet[2555]: I0306 01:44:25.748374 2555 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 01:44:25.763869 kubelet[2555]: E0306 01:44:25.763704 2555 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 01:44:25.780960 kubelet[2555]: I0306 01:44:25.780822 2555 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:25.781651 kubelet[2555]: I0306 01:44:25.780844 2555 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:25.783762 kubelet[2555]: I0306 01:44:25.783641 2555 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:25.815968 kubelet[2555]: E0306 01:44:25.815679 2555 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:25.848762 kubelet[2555]: I0306 01:44:25.848150 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:25.848762 kubelet[2555]: I0306 01:44:25.848177 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:25.848762 kubelet[2555]: I0306 01:44:25.848194 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ac9f33d3563032bb7738778243081c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ac9f33d3563032bb7738778243081c2\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:25.848762 kubelet[2555]: I0306 01:44:25.848419 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:25.848762 kubelet[2555]: I0306 01:44:25.848536 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:25.849539 kubelet[2555]: I0306 01:44:25.848618 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:25.849539 kubelet[2555]: I0306 01:44:25.848635 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ac9f33d3563032bb7738778243081c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ac9f33d3563032bb7738778243081c2\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:25.849539 kubelet[2555]: I0306 01:44:25.848656 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ac9f33d3563032bb7738778243081c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ac9f33d3563032bb7738778243081c2\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:25.849539 kubelet[2555]: I0306 01:44:25.848679 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:25.848878 sudo[2597]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 6 01:44:25.849558 sudo[2597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 6 01:44:25.885658 kubelet[2555]: I0306 01:44:25.885503 2555 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:44:25.897346 kubelet[2555]: I0306 01:44:25.897297 2555 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 6 01:44:25.897766 kubelet[2555]: I0306 01:44:25.897502 2555 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 01:44:26.116289 kubelet[2555]: E0306 01:44:26.116208 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:26.118247 kubelet[2555]: E0306 01:44:26.118013 2555 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:26.118318 kubelet[2555]: E0306 01:44:26.118265 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:26.571819 sudo[2597]: pam_unix(sudo:session): session closed for user root Mar 6 01:44:26.614302 kubelet[2555]: I0306 01:44:26.614078 2555 apiserver.go:52] "Watching apiserver" Mar 6 01:44:26.645261 kubelet[2555]: I0306 01:44:26.645107 2555 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 6 01:44:26.693188 kubelet[2555]: E0306 01:44:26.693123 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:26.695517 kubelet[2555]: E0306 01:44:26.693401 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:26.695517 kubelet[2555]: E0306 01:44:26.693796 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:26.736409 kubelet[2555]: I0306 01:44:26.736280 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.736183053 podStartE2EDuration="2.736183053s" podCreationTimestamp="2026-03-06 01:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:44:26.725421506 +0000 UTC m=+1.229173760" watchObservedRunningTime="2026-03-06 01:44:26.736183053 +0000 UTC m=+1.239935297" 
Mar 6 01:44:26.736409 kubelet[2555]: I0306 01:44:26.736386 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.736381952 podStartE2EDuration="1.736381952s" podCreationTimestamp="2026-03-06 01:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:44:26.73565362 +0000 UTC m=+1.239405874" watchObservedRunningTime="2026-03-06 01:44:26.736381952 +0000 UTC m=+1.240134196" Mar 6 01:44:26.746102 kubelet[2555]: I0306 01:44:26.745949 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.745930505 podStartE2EDuration="1.745930505s" podCreationTimestamp="2026-03-06 01:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:44:26.744306213 +0000 UTC m=+1.248058467" watchObservedRunningTime="2026-03-06 01:44:26.745930505 +0000 UTC m=+1.249682749" Mar 6 01:44:27.695244 kubelet[2555]: E0306 01:44:27.694882 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:27.695244 kubelet[2555]: E0306 01:44:27.695206 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:27.980243 sudo[1645]: pam_unix(sudo:session): session closed for user root Mar 6 01:44:27.983693 sshd[1642]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:27.991099 systemd[1]: sshd@8-10.0.0.105:22-10.0.0.1:59180.service: Deactivated successfully. Mar 6 01:44:27.994052 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 6 01:44:27.994341 systemd[1]: session-9.scope: Consumed 9.639s CPU time, 160.3M memory peak, 0B memory swap peak. Mar 6 01:44:27.996615 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Mar 6 01:44:27.998427 systemd-logind[1442]: Removed session 9. Mar 6 01:44:28.696685 kubelet[2555]: E0306 01:44:28.696599 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:28.696685 kubelet[2555]: E0306 01:44:28.696610 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:29.698518 kubelet[2555]: E0306 01:44:29.698353 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:30.702505 kubelet[2555]: E0306 01:44:30.701541 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:31.136096 kubelet[2555]: E0306 01:44:31.136066 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:31.479559 kubelet[2555]: I0306 01:44:31.479322 2555 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 01:44:31.479908 containerd[1453]: time="2026-03-06T01:44:31.479802199Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 6 01:44:31.480407 kubelet[2555]: I0306 01:44:31.480347 2555 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 01:44:31.702982 kubelet[2555]: E0306 01:44:31.702756 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:32.469063 systemd[1]: Created slice kubepods-besteffort-pod706c5a87_a579_44d5_a0df_a0f1130b75b4.slice - libcontainer container kubepods-besteffort-pod706c5a87_a579_44d5_a0df_a0f1130b75b4.slice. Mar 6 01:44:32.486798 systemd[1]: Created slice kubepods-burstable-pod0863c04f_8c11_43b1_a4a0_c84fc353665d.slice - libcontainer container kubepods-burstable-pod0863c04f_8c11_43b1_a4a0_c84fc353665d.slice. Mar 6 01:44:32.497274 kubelet[2555]: I0306 01:44:32.497243 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-xtables-lock\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.497531 kubelet[2555]: I0306 01:44:32.497412 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0863c04f-8c11-43b1-a4a0-c84fc353665d-clustermesh-secrets\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.497685 kubelet[2555]: I0306 01:44:32.497662 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-host-proc-sys-net\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.497803 kubelet[2555]: I0306 01:44:32.497781 2555 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/706c5a87-a579-44d5-a0df-a0f1130b75b4-xtables-lock\") pod \"kube-proxy-2m69l\" (UID: \"706c5a87-a579-44d5-a0df-a0f1130b75b4\") " pod="kube-system/kube-proxy-2m69l" Mar 6 01:44:32.498181 kubelet[2555]: I0306 01:44:32.498018 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-bpf-maps\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.498181 kubelet[2555]: I0306 01:44:32.498111 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cni-path\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.498181 kubelet[2555]: I0306 01:44:32.498144 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-etc-cni-netd\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.498181 kubelet[2555]: I0306 01:44:32.498174 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-config-path\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.498377 kubelet[2555]: I0306 01:44:32.498201 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfz29\" (UniqueName: 
\"kubernetes.io/projected/0863c04f-8c11-43b1-a4a0-c84fc353665d-kube-api-access-cfz29\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.498377 kubelet[2555]: I0306 01:44:32.498240 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm2z9\" (UniqueName: \"kubernetes.io/projected/706c5a87-a579-44d5-a0df-a0f1130b75b4-kube-api-access-qm2z9\") pod \"kube-proxy-2m69l\" (UID: \"706c5a87-a579-44d5-a0df-a0f1130b75b4\") " pod="kube-system/kube-proxy-2m69l" Mar 6 01:44:32.498377 kubelet[2555]: I0306 01:44:32.498267 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-hostproc\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.498377 kubelet[2555]: I0306 01:44:32.498311 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-host-proc-sys-kernel\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.498377 kubelet[2555]: I0306 01:44:32.498339 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0863c04f-8c11-43b1-a4a0-c84fc353665d-hubble-tls\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.498674 kubelet[2555]: I0306 01:44:32.498370 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-run\") pod \"cilium-x6vws\" (UID: 
\"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.498674 kubelet[2555]: I0306 01:44:32.498411 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-lib-modules\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.498674 kubelet[2555]: I0306 01:44:32.498513 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/706c5a87-a579-44d5-a0df-a0f1130b75b4-kube-proxy\") pod \"kube-proxy-2m69l\" (UID: \"706c5a87-a579-44d5-a0df-a0f1130b75b4\") " pod="kube-system/kube-proxy-2m69l" Mar 6 01:44:32.498674 kubelet[2555]: I0306 01:44:32.498563 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/706c5a87-a579-44d5-a0df-a0f1130b75b4-lib-modules\") pod \"kube-proxy-2m69l\" (UID: \"706c5a87-a579-44d5-a0df-a0f1130b75b4\") " pod="kube-system/kube-proxy-2m69l" Mar 6 01:44:32.498674 kubelet[2555]: I0306 01:44:32.498596 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-cgroup\") pod \"cilium-x6vws\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") " pod="kube-system/cilium-x6vws" Mar 6 01:44:32.682608 systemd[1]: Created slice kubepods-besteffort-podaa8348b8_0358_4c84_a119_9e16eec798fe.slice - libcontainer container kubepods-besteffort-podaa8348b8_0358_4c84_a119_9e16eec798fe.slice. 
Mar 6 01:44:32.702058 kubelet[2555]: I0306 01:44:32.701930 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa8348b8-0358-4c84-a119-9e16eec798fe-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-v9xt7\" (UID: \"aa8348b8-0358-4c84-a119-9e16eec798fe\") " pod="kube-system/cilium-operator-6f9c7c5859-v9xt7" Mar 6 01:44:32.702058 kubelet[2555]: I0306 01:44:32.702049 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqc6q\" (UniqueName: \"kubernetes.io/projected/aa8348b8-0358-4c84-a119-9e16eec798fe-kube-api-access-nqc6q\") pod \"cilium-operator-6f9c7c5859-v9xt7\" (UID: \"aa8348b8-0358-4c84-a119-9e16eec798fe\") " pod="kube-system/cilium-operator-6f9c7c5859-v9xt7" Mar 6 01:44:32.785381 kubelet[2555]: E0306 01:44:32.785180 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:32.786624 containerd[1453]: time="2026-03-06T01:44:32.786538170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2m69l,Uid:706c5a87-a579-44d5-a0df-a0f1130b75b4,Namespace:kube-system,Attempt:0,}" Mar 6 01:44:32.795830 kubelet[2555]: E0306 01:44:32.795733 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:32.797239 containerd[1453]: time="2026-03-06T01:44:32.797152611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6vws,Uid:0863c04f-8c11-43b1-a4a0-c84fc353665d,Namespace:kube-system,Attempt:0,}" Mar 6 01:44:32.831758 containerd[1453]: time="2026-03-06T01:44:32.831348113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:44:32.831758 containerd[1453]: time="2026-03-06T01:44:32.831498410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:44:32.831758 containerd[1453]: time="2026-03-06T01:44:32.831523801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:32.831758 containerd[1453]: time="2026-03-06T01:44:32.831669149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:32.851642 containerd[1453]: time="2026-03-06T01:44:32.850844628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:44:32.851642 containerd[1453]: time="2026-03-06T01:44:32.850968734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:44:32.851642 containerd[1453]: time="2026-03-06T01:44:32.851048373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:32.851642 containerd[1453]: time="2026-03-06T01:44:32.851202127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:32.866795 systemd[1]: Started cri-containerd-d8e1a942d9ac4462a6ddf1efeb64b1715d3ad3d92625575ee931998407ba868a.scope - libcontainer container d8e1a942d9ac4462a6ddf1efeb64b1715d3ad3d92625575ee931998407ba868a. Mar 6 01:44:32.880891 systemd[1]: Started cri-containerd-49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81.scope - libcontainer container 49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81. 
Mar 6 01:44:32.912552 containerd[1453]: time="2026-03-06T01:44:32.911851364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2m69l,Uid:706c5a87-a579-44d5-a0df-a0f1130b75b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8e1a942d9ac4462a6ddf1efeb64b1715d3ad3d92625575ee931998407ba868a\"" Mar 6 01:44:32.913832 kubelet[2555]: E0306 01:44:32.913745 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:32.922923 containerd[1453]: time="2026-03-06T01:44:32.922811312Z" level=info msg="CreateContainer within sandbox \"d8e1a942d9ac4462a6ddf1efeb64b1715d3ad3d92625575ee931998407ba868a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 01:44:32.926222 containerd[1453]: time="2026-03-06T01:44:32.926133712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6vws,Uid:0863c04f-8c11-43b1-a4a0-c84fc353665d,Namespace:kube-system,Attempt:0,} returns sandbox id \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\"" Mar 6 01:44:32.928131 kubelet[2555]: E0306 01:44:32.928109 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:32.932594 containerd[1453]: time="2026-03-06T01:44:32.930368775Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 6 01:44:32.948659 containerd[1453]: time="2026-03-06T01:44:32.948579203Z" level=info msg="CreateContainer within sandbox \"d8e1a942d9ac4462a6ddf1efeb64b1715d3ad3d92625575ee931998407ba868a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d269377cf852b2c2609496a15e4734964db85681bbb8de2fd7700ea71d17cea\"" Mar 6 01:44:32.949588 containerd[1453]: time="2026-03-06T01:44:32.949506305Z" level=info 
msg="StartContainer for \"7d269377cf852b2c2609496a15e4734964db85681bbb8de2fd7700ea71d17cea\"" Mar 6 01:44:32.993713 kubelet[2555]: E0306 01:44:32.993549 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:32.994834 containerd[1453]: time="2026-03-06T01:44:32.994338890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-v9xt7,Uid:aa8348b8-0358-4c84-a119-9e16eec798fe,Namespace:kube-system,Attempt:0,}" Mar 6 01:44:32.995956 systemd[1]: Started cri-containerd-7d269377cf852b2c2609496a15e4734964db85681bbb8de2fd7700ea71d17cea.scope - libcontainer container 7d269377cf852b2c2609496a15e4734964db85681bbb8de2fd7700ea71d17cea. Mar 6 01:44:33.037538 containerd[1453]: time="2026-03-06T01:44:33.036540561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:44:33.037538 containerd[1453]: time="2026-03-06T01:44:33.036820816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:44:33.037538 containerd[1453]: time="2026-03-06T01:44:33.036840394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:33.037538 containerd[1453]: time="2026-03-06T01:44:33.037220055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:33.051690 containerd[1453]: time="2026-03-06T01:44:33.051602810Z" level=info msg="StartContainer for \"7d269377cf852b2c2609496a15e4734964db85681bbb8de2fd7700ea71d17cea\" returns successfully" Mar 6 01:44:33.081733 systemd[1]: Started cri-containerd-b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a.scope - libcontainer container b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a. Mar 6 01:44:33.139140 containerd[1453]: time="2026-03-06T01:44:33.139010534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-v9xt7,Uid:aa8348b8-0358-4c84-a119-9e16eec798fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a\"" Mar 6 01:44:33.142712 kubelet[2555]: E0306 01:44:33.141293 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:33.714719 kubelet[2555]: E0306 01:44:33.714612 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:36.980621 kubelet[2555]: E0306 01:44:36.980075 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:36.994014 kubelet[2555]: I0306 01:44:36.993793 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2m69l" podStartSLOduration=4.992745283 podStartE2EDuration="4.992745283s" podCreationTimestamp="2026-03-06 01:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:44:33.732260771 +0000 UTC m=+8.236013065" 
watchObservedRunningTime="2026-03-06 01:44:36.992745283 +0000 UTC m=+11.496497537" Mar 6 01:44:37.726939 kubelet[2555]: E0306 01:44:37.726785 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:44.099852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752448302.mount: Deactivated successfully. Mar 6 01:44:46.547943 containerd[1453]: time="2026-03-06T01:44:46.547858050Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:46.549045 containerd[1453]: time="2026-03-06T01:44:46.548981608Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 6 01:44:46.550636 containerd[1453]: time="2026-03-06T01:44:46.550571843Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:46.552987 containerd[1453]: time="2026-03-06T01:44:46.552886529Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.620319781s" Mar 6 01:44:46.552987 containerd[1453]: time="2026-03-06T01:44:46.552961174Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" 
Mar 6 01:44:46.554550 containerd[1453]: time="2026-03-06T01:44:46.554496492Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 6 01:44:46.559376 containerd[1453]: time="2026-03-06T01:44:46.559166111Z" level=info msg="CreateContainer within sandbox \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 6 01:44:46.584926 containerd[1453]: time="2026-03-06T01:44:46.584822728Z" level=info msg="CreateContainer within sandbox \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455\"" Mar 6 01:44:46.585601 containerd[1453]: time="2026-03-06T01:44:46.585566247Z" level=info msg="StartContainer for \"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455\"" Mar 6 01:44:46.662919 systemd[1]: Started cri-containerd-36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455.scope - libcontainer container 36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455. Mar 6 01:44:46.719870 containerd[1453]: time="2026-03-06T01:44:46.719735375Z" level=info msg="StartContainer for \"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455\" returns successfully" Mar 6 01:44:46.753209 systemd[1]: cri-containerd-36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455.scope: Deactivated successfully. 
Mar 6 01:44:46.926815 kubelet[2555]: E0306 01:44:46.926314 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:46.977744 containerd[1453]: time="2026-03-06T01:44:46.977309559Z" level=info msg="shim disconnected" id=36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455 namespace=k8s.io Mar 6 01:44:46.977744 containerd[1453]: time="2026-03-06T01:44:46.977491303Z" level=warning msg="cleaning up after shim disconnected" id=36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455 namespace=k8s.io Mar 6 01:44:46.977744 containerd[1453]: time="2026-03-06T01:44:46.977511933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:44:47.578730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455-rootfs.mount: Deactivated successfully. Mar 6 01:44:47.933741 kubelet[2555]: E0306 01:44:47.932875 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:47.955591 containerd[1453]: time="2026-03-06T01:44:47.955410024Z" level=info msg="CreateContainer within sandbox \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 6 01:44:47.982093 containerd[1453]: time="2026-03-06T01:44:47.981928594Z" level=info msg="CreateContainer within sandbox \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675\"" Mar 6 01:44:47.986559 containerd[1453]: time="2026-03-06T01:44:47.983869178Z" level=info msg="StartContainer for 
\"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675\"" Mar 6 01:44:48.056740 systemd[1]: Started cri-containerd-9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675.scope - libcontainer container 9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675. Mar 6 01:44:48.114250 containerd[1453]: time="2026-03-06T01:44:48.114138870Z" level=info msg="StartContainer for \"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675\" returns successfully" Mar 6 01:44:48.137017 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 6 01:44:48.137336 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:44:48.137552 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 6 01:44:48.146274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 6 01:44:48.146894 systemd[1]: cri-containerd-9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675.scope: Deactivated successfully. Mar 6 01:44:48.207545 containerd[1453]: time="2026-03-06T01:44:48.207307058Z" level=info msg="shim disconnected" id=9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675 namespace=k8s.io Mar 6 01:44:48.207545 containerd[1453]: time="2026-03-06T01:44:48.207397384Z" level=warning msg="cleaning up after shim disconnected" id=9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675 namespace=k8s.io Mar 6 01:44:48.207545 containerd[1453]: time="2026-03-06T01:44:48.207407293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:44:48.222624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:44:48.579181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675-rootfs.mount: Deactivated successfully. 
Mar 6 01:44:48.603699 containerd[1453]: time="2026-03-06T01:44:48.603619714Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:48.604914 containerd[1453]: time="2026-03-06T01:44:48.604852866Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 6 01:44:48.606547 containerd[1453]: time="2026-03-06T01:44:48.606405198Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:48.609196 containerd[1453]: time="2026-03-06T01:44:48.609071193Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.054536114s" Mar 6 01:44:48.609196 containerd[1453]: time="2026-03-06T01:44:48.609129065Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 6 01:44:48.615046 containerd[1453]: time="2026-03-06T01:44:48.614821477Z" level=info msg="CreateContainer within sandbox \"b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 6 01:44:48.635920 containerd[1453]: time="2026-03-06T01:44:48.635833602Z" level=info msg="CreateContainer within sandbox 
\"b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\"" Mar 6 01:44:48.636642 containerd[1453]: time="2026-03-06T01:44:48.636612056Z" level=info msg="StartContainer for \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\"" Mar 6 01:44:48.703107 systemd[1]: Started cri-containerd-39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123.scope - libcontainer container 39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123. Mar 6 01:44:48.752763 containerd[1453]: time="2026-03-06T01:44:48.752608437Z" level=info msg="StartContainer for \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\" returns successfully" Mar 6 01:44:48.937393 kubelet[2555]: E0306 01:44:48.937217 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:48.945606 kubelet[2555]: E0306 01:44:48.945513 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:48.955660 containerd[1453]: time="2026-03-06T01:44:48.955482528Z" level=info msg="CreateContainer within sandbox \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 6 01:44:48.989375 containerd[1453]: time="2026-03-06T01:44:48.989210718Z" level=info msg="CreateContainer within sandbox \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479\"" Mar 6 01:44:48.996499 containerd[1453]: time="2026-03-06T01:44:48.992179395Z" level=info msg="StartContainer 
for \"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479\"" Mar 6 01:44:49.056095 kubelet[2555]: I0306 01:44:49.055999 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-v9xt7" podStartSLOduration=1.589694239 podStartE2EDuration="17.05598087s" podCreationTimestamp="2026-03-06 01:44:32 +0000 UTC" firstStartedPulling="2026-03-06 01:44:33.143685065 +0000 UTC m=+7.647437309" lastFinishedPulling="2026-03-06 01:44:48.609971696 +0000 UTC m=+23.113723940" observedRunningTime="2026-03-06 01:44:48.964292459 +0000 UTC m=+23.468044704" watchObservedRunningTime="2026-03-06 01:44:49.05598087 +0000 UTC m=+23.559733115" Mar 6 01:44:49.113736 systemd[1]: Started cri-containerd-683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479.scope - libcontainer container 683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479. Mar 6 01:44:49.188198 containerd[1453]: time="2026-03-06T01:44:49.187994218Z" level=info msg="StartContainer for \"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479\" returns successfully" Mar 6 01:44:49.191317 systemd[1]: cri-containerd-683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479.scope: Deactivated successfully. 
Mar 6 01:44:49.256996 containerd[1453]: time="2026-03-06T01:44:49.256900599Z" level=info msg="shim disconnected" id=683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479 namespace=k8s.io Mar 6 01:44:49.256996 containerd[1453]: time="2026-03-06T01:44:49.256977007Z" level=warning msg="cleaning up after shim disconnected" id=683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479 namespace=k8s.io Mar 6 01:44:49.256996 containerd[1453]: time="2026-03-06T01:44:49.256993820Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:44:49.349385 containerd[1453]: time="2026-03-06T01:44:49.349205857Z" level=warning msg="cleanup warnings time=\"2026-03-06T01:44:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 6 01:44:49.955947 kubelet[2555]: E0306 01:44:49.955826 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:49.957156 kubelet[2555]: E0306 01:44:49.956323 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:49.963110 containerd[1453]: time="2026-03-06T01:44:49.962938242Z" level=info msg="CreateContainer within sandbox \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 6 01:44:50.045683 containerd[1453]: time="2026-03-06T01:44:50.045413439Z" level=info msg="CreateContainer within sandbox \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893\"" Mar 6 01:44:50.046762 containerd[1453]: 
time="2026-03-06T01:44:50.046684277Z" level=info msg="StartContainer for \"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893\"" Mar 6 01:44:50.119224 systemd[1]: Started cri-containerd-d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893.scope - libcontainer container d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893. Mar 6 01:44:50.166876 systemd[1]: cri-containerd-d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893.scope: Deactivated successfully. Mar 6 01:44:50.169707 containerd[1453]: time="2026-03-06T01:44:50.169609054Z" level=info msg="StartContainer for \"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893\" returns successfully" Mar 6 01:44:50.201540 containerd[1453]: time="2026-03-06T01:44:50.201361033Z" level=info msg="shim disconnected" id=d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893 namespace=k8s.io Mar 6 01:44:50.201828 containerd[1453]: time="2026-03-06T01:44:50.201581902Z" level=warning msg="cleaning up after shim disconnected" id=d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893 namespace=k8s.io Mar 6 01:44:50.201828 containerd[1453]: time="2026-03-06T01:44:50.201602402Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:44:50.580501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893-rootfs.mount: Deactivated successfully. 
Mar 6 01:44:50.962102 kubelet[2555]: E0306 01:44:50.961847 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:50.971201 containerd[1453]: time="2026-03-06T01:44:50.971114624Z" level=info msg="CreateContainer within sandbox \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 6 01:44:50.996022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount950644043.mount: Deactivated successfully. Mar 6 01:44:50.998244 containerd[1453]: time="2026-03-06T01:44:50.998161443Z" level=info msg="CreateContainer within sandbox \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\"" Mar 6 01:44:50.999016 containerd[1453]: time="2026-03-06T01:44:50.998975538Z" level=info msg="StartContainer for \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\"" Mar 6 01:44:51.039946 systemd[1]: Started cri-containerd-9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef.scope - libcontainer container 9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef. Mar 6 01:44:51.128959 containerd[1453]: time="2026-03-06T01:44:51.128811750Z" level=info msg="StartContainer for \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\" returns successfully" Mar 6 01:44:51.307866 kubelet[2555]: I0306 01:44:51.307838 2555 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 6 01:44:51.356573 systemd[1]: Created slice kubepods-burstable-pode6784957_790d_4882_8b7b_59cc89176488.slice - libcontainer container kubepods-burstable-pode6784957_790d_4882_8b7b_59cc89176488.slice. 
Mar 6 01:44:51.367902 systemd[1]: Created slice kubepods-burstable-pod23687849_98ab_49ef_b66f_84562b33242e.slice - libcontainer container kubepods-burstable-pod23687849_98ab_49ef_b66f_84562b33242e.slice. Mar 6 01:44:51.423965 kubelet[2555]: I0306 01:44:51.423895 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87ds2\" (UniqueName: \"kubernetes.io/projected/e6784957-790d-4882-8b7b-59cc89176488-kube-api-access-87ds2\") pod \"coredns-66bc5c9577-zw55v\" (UID: \"e6784957-790d-4882-8b7b-59cc89176488\") " pod="kube-system/coredns-66bc5c9577-zw55v" Mar 6 01:44:51.423965 kubelet[2555]: I0306 01:44:51.423944 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23687849-98ab-49ef-b66f-84562b33242e-config-volume\") pod \"coredns-66bc5c9577-j4j4c\" (UID: \"23687849-98ab-49ef-b66f-84562b33242e\") " pod="kube-system/coredns-66bc5c9577-j4j4c" Mar 6 01:44:51.423965 kubelet[2555]: I0306 01:44:51.423965 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vppb8\" (UniqueName: \"kubernetes.io/projected/23687849-98ab-49ef-b66f-84562b33242e-kube-api-access-vppb8\") pod \"coredns-66bc5c9577-j4j4c\" (UID: \"23687849-98ab-49ef-b66f-84562b33242e\") " pod="kube-system/coredns-66bc5c9577-j4j4c" Mar 6 01:44:51.423965 kubelet[2555]: I0306 01:44:51.423982 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6784957-790d-4882-8b7b-59cc89176488-config-volume\") pod \"coredns-66bc5c9577-zw55v\" (UID: \"e6784957-790d-4882-8b7b-59cc89176488\") " pod="kube-system/coredns-66bc5c9577-zw55v" Mar 6 01:44:51.671760 kubelet[2555]: E0306 01:44:51.671354 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:51.675117 containerd[1453]: time="2026-03-06T01:44:51.674387508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zw55v,Uid:e6784957-790d-4882-8b7b-59cc89176488,Namespace:kube-system,Attempt:0,}" Mar 6 01:44:51.678665 kubelet[2555]: E0306 01:44:51.677590 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:51.681036 containerd[1453]: time="2026-03-06T01:44:51.678249089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j4j4c,Uid:23687849-98ab-49ef-b66f-84562b33242e,Namespace:kube-system,Attempt:0,}" Mar 6 01:44:52.034209 kubelet[2555]: E0306 01:44:52.034028 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:52.053125 kubelet[2555]: I0306 01:44:52.052972 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x6vws" podStartSLOduration=6.428088344 podStartE2EDuration="20.05294969s" podCreationTimestamp="2026-03-06 01:44:32 +0000 UTC" firstStartedPulling="2026-03-06 01:44:32.929252242 +0000 UTC m=+7.433004486" lastFinishedPulling="2026-03-06 01:44:46.554113587 +0000 UTC m=+21.057865832" observedRunningTime="2026-03-06 01:44:52.052031581 +0000 UTC m=+26.555783835" watchObservedRunningTime="2026-03-06 01:44:52.05294969 +0000 UTC m=+26.556701934" Mar 6 01:44:53.044054 kubelet[2555]: E0306 01:44:53.043753 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:53.525233 systemd-networkd[1383]: cilium_host: Link UP Mar 6 01:44:53.525570 systemd-networkd[1383]: cilium_net: Link UP Mar 6 01:44:53.525794 
systemd-networkd[1383]: cilium_net: Gained carrier Mar 6 01:44:53.526046 systemd-networkd[1383]: cilium_host: Gained carrier Mar 6 01:44:53.659629 systemd-networkd[1383]: cilium_host: Gained IPv6LL Mar 6 01:44:53.667604 systemd-networkd[1383]: cilium_vxlan: Link UP Mar 6 01:44:53.667613 systemd-networkd[1383]: cilium_vxlan: Gained carrier Mar 6 01:44:53.892784 systemd-networkd[1383]: cilium_net: Gained IPv6LL Mar 6 01:44:53.929521 kernel: NET: Registered PF_ALG protocol family Mar 6 01:44:54.051397 kubelet[2555]: E0306 01:44:54.051355 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:54.911921 systemd-networkd[1383]: lxc_health: Link UP Mar 6 01:44:54.917106 systemd-networkd[1383]: lxc_health: Gained carrier Mar 6 01:44:55.053634 kubelet[2555]: E0306 01:44:55.053533 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:55.289529 systemd-networkd[1383]: lxcae1ca728da03: Link UP Mar 6 01:44:55.302866 kernel: eth0: renamed from tmp4694c Mar 6 01:44:55.308885 systemd-networkd[1383]: lxcae1ca728da03: Gained carrier Mar 6 01:44:55.312520 systemd-networkd[1383]: lxc0840930d63a4: Link UP Mar 6 01:44:55.324583 kernel: eth0: renamed from tmpc6907 Mar 6 01:44:55.338494 systemd-networkd[1383]: lxc0840930d63a4: Gained carrier Mar 6 01:44:55.667153 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL Mar 6 01:44:56.370764 systemd-networkd[1383]: lxc_health: Gained IPv6LL Mar 6 01:44:56.498757 systemd-networkd[1383]: lxc0840930d63a4: Gained IPv6LL Mar 6 01:44:56.797232 kubelet[2555]: E0306 01:44:56.795600 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:57.058853 
kubelet[2555]: E0306 01:44:57.058821 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:57.074765 systemd-networkd[1383]: lxcae1ca728da03: Gained IPv6LL Mar 6 01:45:01.482161 containerd[1453]: time="2026-03-06T01:45:01.480791419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:01.482161 containerd[1453]: time="2026-03-06T01:45:01.481767425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:01.482161 containerd[1453]: time="2026-03-06T01:45:01.481793205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:01.482161 containerd[1453]: time="2026-03-06T01:45:01.481905692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:01.541070 systemd[1]: run-containerd-runc-k8s.io-c6907975af0c0bdbc269066ad1e76a27d9f490ca957cc2273d8aa4caf8f8be9b-runc.tjrZ1X.mount: Deactivated successfully. Mar 6 01:45:01.563706 systemd[1]: Started cri-containerd-c6907975af0c0bdbc269066ad1e76a27d9f490ca957cc2273d8aa4caf8f8be9b.scope - libcontainer container c6907975af0c0bdbc269066ad1e76a27d9f490ca957cc2273d8aa4caf8f8be9b. Mar 6 01:45:01.598770 systemd-resolved[1384]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:45:01.656847 containerd[1453]: time="2026-03-06T01:45:01.648853342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:01.656847 containerd[1453]: time="2026-03-06T01:45:01.648915262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:01.656847 containerd[1453]: time="2026-03-06T01:45:01.648983843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:01.656847 containerd[1453]: time="2026-03-06T01:45:01.649133031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:01.672971 containerd[1453]: time="2026-03-06T01:45:01.672873804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j4j4c,Uid:23687849-98ab-49ef-b66f-84562b33242e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6907975af0c0bdbc269066ad1e76a27d9f490ca957cc2273d8aa4caf8f8be9b\"" Mar 6 01:45:01.674262 kubelet[2555]: E0306 01:45:01.674182 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:01.688863 containerd[1453]: time="2026-03-06T01:45:01.687425915Z" level=info msg="CreateContainer within sandbox \"c6907975af0c0bdbc269066ad1e76a27d9f490ca957cc2273d8aa4caf8f8be9b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:45:01.719577 containerd[1453]: time="2026-03-06T01:45:01.719510660Z" level=info msg="CreateContainer within sandbox \"c6907975af0c0bdbc269066ad1e76a27d9f490ca957cc2273d8aa4caf8f8be9b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49ab866ce735beb77f5d2b03e650bfaf53df587cf8353763c86e0053c6e24d74\"" Mar 6 01:45:01.721490 containerd[1453]: time="2026-03-06T01:45:01.720588722Z" level=info msg="StartContainer for 
\"49ab866ce735beb77f5d2b03e650bfaf53df587cf8353763c86e0053c6e24d74\"" Mar 6 01:45:01.720766 systemd[1]: Started cri-containerd-4694c1c9a02425a7fe98f8c4c6c6b0717a58342c33481769a9abffa719813361.scope - libcontainer container 4694c1c9a02425a7fe98f8c4c6c6b0717a58342c33481769a9abffa719813361. Mar 6 01:45:01.739031 systemd-resolved[1384]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:45:01.759716 systemd[1]: Started cri-containerd-49ab866ce735beb77f5d2b03e650bfaf53df587cf8353763c86e0053c6e24d74.scope - libcontainer container 49ab866ce735beb77f5d2b03e650bfaf53df587cf8353763c86e0053c6e24d74. Mar 6 01:45:01.781335 containerd[1453]: time="2026-03-06T01:45:01.781297996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zw55v,Uid:e6784957-790d-4882-8b7b-59cc89176488,Namespace:kube-system,Attempt:0,} returns sandbox id \"4694c1c9a02425a7fe98f8c4c6c6b0717a58342c33481769a9abffa719813361\"" Mar 6 01:45:01.783216 kubelet[2555]: E0306 01:45:01.783194 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:01.794353 containerd[1453]: time="2026-03-06T01:45:01.794149618Z" level=info msg="CreateContainer within sandbox \"4694c1c9a02425a7fe98f8c4c6c6b0717a58342c33481769a9abffa719813361\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:45:01.809757 containerd[1453]: time="2026-03-06T01:45:01.809691673Z" level=info msg="StartContainer for \"49ab866ce735beb77f5d2b03e650bfaf53df587cf8353763c86e0053c6e24d74\" returns successfully" Mar 6 01:45:01.812368 containerd[1453]: time="2026-03-06T01:45:01.812249577Z" level=info msg="CreateContainer within sandbox \"4694c1c9a02425a7fe98f8c4c6c6b0717a58342c33481769a9abffa719813361\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"b78a4f440ee0ed60faa747d415dce2717333ba45b93ed85cefe91590fca1b041\"" Mar 6 01:45:01.814120 containerd[1453]: time="2026-03-06T01:45:01.812879867Z" level=info msg="StartContainer for \"b78a4f440ee0ed60faa747d415dce2717333ba45b93ed85cefe91590fca1b041\"" Mar 6 01:45:01.847672 systemd[1]: Started cri-containerd-b78a4f440ee0ed60faa747d415dce2717333ba45b93ed85cefe91590fca1b041.scope - libcontainer container b78a4f440ee0ed60faa747d415dce2717333ba45b93ed85cefe91590fca1b041. Mar 6 01:45:01.895272 containerd[1453]: time="2026-03-06T01:45:01.895208552Z" level=info msg="StartContainer for \"b78a4f440ee0ed60faa747d415dce2717333ba45b93ed85cefe91590fca1b041\" returns successfully" Mar 6 01:45:02.138625 kubelet[2555]: E0306 01:45:02.138555 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:02.144046 kubelet[2555]: E0306 01:45:02.143900 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:02.437900 kubelet[2555]: I0306 01:45:02.437675 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-j4j4c" podStartSLOduration=30.437647675 podStartE2EDuration="30.437647675s" podCreationTimestamp="2026-03-06 01:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:45:02.300642303 +0000 UTC m=+36.804394557" watchObservedRunningTime="2026-03-06 01:45:02.437647675 +0000 UTC m=+36.941399929" Mar 6 01:45:02.453868 kubelet[2555]: I0306 01:45:02.453160 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zw55v" podStartSLOduration=30.45313973 podStartE2EDuration="30.45313973s" podCreationTimestamp="2026-03-06 01:44:32 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:45:02.451644327 +0000 UTC m=+36.955396571" watchObservedRunningTime="2026-03-06 01:45:02.45313973 +0000 UTC m=+36.956891994" Mar 6 01:45:03.145736 kubelet[2555]: E0306 01:45:03.145637 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:03.145736 kubelet[2555]: E0306 01:45:03.145666 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:04.179450 kubelet[2555]: E0306 01:45:04.179414 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:04.179993 kubelet[2555]: E0306 01:45:04.179682 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:06.381019 systemd[1]: Started sshd@9-10.0.0.105:22-10.0.0.1:47434.service - OpenSSH per-connection server daemon (10.0.0.1:47434). Mar 6 01:45:06.440480 sshd[3957]: Accepted publickey for core from 10.0.0.1 port 47434 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:06.442530 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:06.448170 systemd-logind[1442]: New session 10 of user core. Mar 6 01:45:06.456636 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 6 01:45:06.608972 sshd[3957]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:06.613804 systemd[1]: sshd@9-10.0.0.105:22-10.0.0.1:47434.service: Deactivated successfully. 
Mar 6 01:45:06.616580 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 01:45:06.617613 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Mar 6 01:45:06.619251 systemd-logind[1442]: Removed session 10. Mar 6 01:45:11.628363 systemd[1]: Started sshd@10-10.0.0.105:22-10.0.0.1:52256.service - OpenSSH per-connection server daemon (10.0.0.1:52256). Mar 6 01:45:11.677499 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 52256 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:11.679949 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:11.685728 systemd-logind[1442]: New session 11 of user core. Mar 6 01:45:11.699666 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 6 01:45:11.837483 sshd[3973]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:11.842343 systemd[1]: sshd@10-10.0.0.105:22-10.0.0.1:52256.service: Deactivated successfully. Mar 6 01:45:11.845027 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 01:45:11.846064 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Mar 6 01:45:11.847632 systemd-logind[1442]: Removed session 11. Mar 6 01:45:16.870345 systemd[1]: Started sshd@11-10.0.0.105:22-10.0.0.1:52260.service - OpenSSH per-connection server daemon (10.0.0.1:52260). Mar 6 01:45:16.944730 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 52260 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:16.947234 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:16.954034 systemd-logind[1442]: New session 12 of user core. Mar 6 01:45:16.969637 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 6 01:45:17.110575 sshd[3988]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:17.125336 systemd[1]: sshd@11-10.0.0.105:22-10.0.0.1:52260.service: Deactivated successfully. 
Mar 6 01:45:17.128151 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 01:45:17.130529 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Mar 6 01:45:17.141989 systemd[1]: Started sshd@12-10.0.0.105:22-10.0.0.1:52272.service - OpenSSH per-connection server daemon (10.0.0.1:52272). Mar 6 01:45:17.143217 systemd-logind[1442]: Removed session 12. Mar 6 01:45:17.184810 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 52272 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:17.186696 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:17.195176 systemd-logind[1442]: New session 13 of user core. Mar 6 01:45:17.211674 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 6 01:45:17.429818 sshd[4003]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:17.444154 systemd[1]: sshd@12-10.0.0.105:22-10.0.0.1:52272.service: Deactivated successfully. Mar 6 01:45:17.448746 systemd[1]: session-13.scope: Deactivated successfully. Mar 6 01:45:17.453167 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Mar 6 01:45:17.464534 systemd[1]: Started sshd@13-10.0.0.105:22-10.0.0.1:52280.service - OpenSSH per-connection server daemon (10.0.0.1:52280). Mar 6 01:45:17.466800 systemd-logind[1442]: Removed session 13. Mar 6 01:45:17.501099 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 52280 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:17.502423 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:17.508566 systemd-logind[1442]: New session 14 of user core. Mar 6 01:45:17.515610 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 6 01:45:17.639774 sshd[4015]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:17.644840 systemd[1]: sshd@13-10.0.0.105:22-10.0.0.1:52280.service: Deactivated successfully. 
Mar 6 01:45:17.647248 systemd[1]: session-14.scope: Deactivated successfully. Mar 6 01:45:17.648215 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Mar 6 01:45:17.649984 systemd-logind[1442]: Removed session 14. Mar 6 01:45:22.654964 systemd[1]: Started sshd@14-10.0.0.105:22-10.0.0.1:46862.service - OpenSSH per-connection server daemon (10.0.0.1:46862). Mar 6 01:45:22.700324 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 46862 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:22.702813 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:22.708723 systemd-logind[1442]: New session 15 of user core. Mar 6 01:45:22.718706 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 6 01:45:22.854367 sshd[4031]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:22.859572 systemd[1]: sshd@14-10.0.0.105:22-10.0.0.1:46862.service: Deactivated successfully. Mar 6 01:45:22.862293 systemd[1]: session-15.scope: Deactivated successfully. Mar 6 01:45:22.863278 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Mar 6 01:45:22.865367 systemd-logind[1442]: Removed session 15. Mar 6 01:45:27.890047 systemd[1]: Started sshd@15-10.0.0.105:22-10.0.0.1:46874.service - OpenSSH per-connection server daemon (10.0.0.1:46874). Mar 6 01:45:27.944763 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 46874 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:27.948039 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:27.956272 systemd-logind[1442]: New session 16 of user core. Mar 6 01:45:27.961733 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 6 01:45:28.115079 sshd[4047]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:28.134122 systemd[1]: sshd@15-10.0.0.105:22-10.0.0.1:46874.service: Deactivated successfully. 
Mar 6 01:45:28.137021 systemd[1]: session-16.scope: Deactivated successfully. Mar 6 01:45:28.139727 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Mar 6 01:45:28.146061 systemd[1]: Started sshd@16-10.0.0.105:22-10.0.0.1:46890.service - OpenSSH per-connection server daemon (10.0.0.1:46890). Mar 6 01:45:28.147790 systemd-logind[1442]: Removed session 16. Mar 6 01:45:28.220041 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 46890 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:28.222797 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:28.232682 systemd-logind[1442]: New session 17 of user core. Mar 6 01:45:28.243901 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 6 01:45:28.601643 sshd[4061]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:28.614999 systemd[1]: sshd@16-10.0.0.105:22-10.0.0.1:46890.service: Deactivated successfully. Mar 6 01:45:28.617201 systemd[1]: session-17.scope: Deactivated successfully. Mar 6 01:45:28.619112 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Mar 6 01:45:28.620921 systemd[1]: Started sshd@17-10.0.0.105:22-10.0.0.1:46902.service - OpenSSH per-connection server daemon (10.0.0.1:46902). Mar 6 01:45:28.622164 systemd-logind[1442]: Removed session 17. Mar 6 01:45:28.692854 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 46902 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:28.694926 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:28.700995 systemd-logind[1442]: New session 18 of user core. Mar 6 01:45:28.711768 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 6 01:45:29.653957 sshd[4073]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:29.663380 systemd[1]: sshd@17-10.0.0.105:22-10.0.0.1:46902.service: Deactivated successfully. 
Mar 6 01:45:29.667185 systemd[1]: session-18.scope: Deactivated successfully. Mar 6 01:45:29.669024 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Mar 6 01:45:29.681089 systemd[1]: Started sshd@18-10.0.0.105:22-10.0.0.1:46904.service - OpenSSH per-connection server daemon (10.0.0.1:46904). Mar 6 01:45:29.684522 systemd-logind[1442]: Removed session 18. Mar 6 01:45:29.726371 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 46904 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:29.728592 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:29.737085 systemd-logind[1442]: New session 19 of user core. Mar 6 01:45:29.745755 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 6 01:45:30.102743 sshd[4091]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:30.116428 systemd[1]: sshd@18-10.0.0.105:22-10.0.0.1:46904.service: Deactivated successfully. Mar 6 01:45:30.121031 systemd[1]: session-19.scope: Deactivated successfully. Mar 6 01:45:30.123874 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Mar 6 01:45:30.135132 systemd[1]: Started sshd@19-10.0.0.105:22-10.0.0.1:42694.service - OpenSSH per-connection server daemon (10.0.0.1:42694). Mar 6 01:45:30.137192 systemd-logind[1442]: Removed session 19. Mar 6 01:45:30.181213 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 42694 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:30.183835 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:30.191668 systemd-logind[1442]: New session 20 of user core. Mar 6 01:45:30.202829 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 6 01:45:30.361277 sshd[4104]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:30.369376 systemd[1]: sshd@19-10.0.0.105:22-10.0.0.1:42694.service: Deactivated successfully. 
Mar 6 01:45:30.371810 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 01:45:30.372986 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit.
Mar 6 01:45:30.374718 systemd-logind[1442]: Removed session 20.
Mar 6 01:45:35.376766 systemd[1]: Started sshd@20-10.0.0.105:22-10.0.0.1:42700.service - OpenSSH per-connection server daemon (10.0.0.1:42700).
Mar 6 01:45:35.422644 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 42700 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:45:35.424882 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:45:35.430733 systemd-logind[1442]: New session 21 of user core.
Mar 6 01:45:35.444711 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 6 01:45:35.576771 sshd[4122]: pam_unix(sshd:session): session closed for user core
Mar 6 01:45:35.582415 systemd[1]: sshd@20-10.0.0.105:22-10.0.0.1:42700.service: Deactivated successfully.
Mar 6 01:45:35.584900 systemd[1]: session-21.scope: Deactivated successfully.
Mar 6 01:45:35.586132 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit.
Mar 6 01:45:35.588013 systemd-logind[1442]: Removed session 21.
Mar 6 01:45:36.678211 kubelet[2555]: E0306 01:45:36.678124 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:45:40.589906 systemd[1]: Started sshd@21-10.0.0.105:22-10.0.0.1:47484.service - OpenSSH per-connection server daemon (10.0.0.1:47484).
Mar 6 01:45:40.639990 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 47484 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:45:40.642511 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:45:40.649083 systemd-logind[1442]: New session 22 of user core.
Mar 6 01:45:40.662862 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 6 01:45:40.803689 sshd[4138]: pam_unix(sshd:session): session closed for user core
Mar 6 01:45:40.810117 systemd[1]: sshd@21-10.0.0.105:22-10.0.0.1:47484.service: Deactivated successfully.
Mar 6 01:45:40.812987 systemd[1]: session-22.scope: Deactivated successfully.
Mar 6 01:45:40.814647 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit.
Mar 6 01:45:40.816659 systemd-logind[1442]: Removed session 22.
Mar 6 01:45:41.678490 kubelet[2555]: E0306 01:45:41.678397 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:45:45.816092 systemd[1]: Started sshd@22-10.0.0.105:22-10.0.0.1:47490.service - OpenSSH per-connection server daemon (10.0.0.1:47490).
Mar 6 01:45:45.858151 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 47490 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:45:45.859919 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:45:45.865556 systemd-logind[1442]: New session 23 of user core.
Mar 6 01:45:45.877756 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 6 01:45:46.016585 sshd[4152]: pam_unix(sshd:session): session closed for user core
Mar 6 01:45:46.027485 systemd[1]: sshd@22-10.0.0.105:22-10.0.0.1:47490.service: Deactivated successfully.
Mar 6 01:45:46.029272 systemd[1]: session-23.scope: Deactivated successfully.
Mar 6 01:45:46.031218 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit.
Mar 6 01:45:46.032659 systemd[1]: Started sshd@23-10.0.0.105:22-10.0.0.1:47494.service - OpenSSH per-connection server daemon (10.0.0.1:47494).
Mar 6 01:45:46.034170 systemd-logind[1442]: Removed session 23.
Mar 6 01:45:46.095978 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 47494 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:45:46.098188 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:45:46.103377 systemd-logind[1442]: New session 24 of user core.
Mar 6 01:45:46.111672 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 6 01:45:47.613555 containerd[1453]: time="2026-03-06T01:45:47.613182108Z" level=info msg="StopContainer for \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\" with timeout 30 (s)"
Mar 6 01:45:47.614729 containerd[1453]: time="2026-03-06T01:45:47.614595427Z" level=info msg="Stop container \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\" with signal terminated"
Mar 6 01:45:47.640791 systemd[1]: cri-containerd-39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123.scope: Deactivated successfully.
Mar 6 01:45:47.670157 systemd[1]: run-containerd-runc-k8s.io-9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef-runc.EWBLM5.mount: Deactivated successfully.
Mar 6 01:45:47.684503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123-rootfs.mount: Deactivated successfully.
Mar 6 01:45:47.693427 containerd[1453]: time="2026-03-06T01:45:47.693373081Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 6 01:45:47.697414 containerd[1453]: time="2026-03-06T01:45:47.697245794Z" level=info msg="shim disconnected" id=39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123 namespace=k8s.io
Mar 6 01:45:47.697414 containerd[1453]: time="2026-03-06T01:45:47.697362924Z" level=warning msg="cleaning up after shim disconnected" id=39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123 namespace=k8s.io
Mar 6 01:45:47.697414 containerd[1453]: time="2026-03-06T01:45:47.697379194Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 6 01:45:47.701788 containerd[1453]: time="2026-03-06T01:45:47.700534416Z" level=info msg="StopContainer for \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\" with timeout 2 (s)"
Mar 6 01:45:47.701788 containerd[1453]: time="2026-03-06T01:45:47.700954091Z" level=info msg="Stop container \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\" with signal terminated"
Mar 6 01:45:47.712164 systemd-networkd[1383]: lxc_health: Link DOWN
Mar 6 01:45:47.712744 systemd-networkd[1383]: lxc_health: Lost carrier
Mar 6 01:45:47.726864 containerd[1453]: time="2026-03-06T01:45:47.726694511Z" level=info msg="StopContainer for \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\" returns successfully"
Mar 6 01:45:47.728693 containerd[1453]: time="2026-03-06T01:45:47.728654351Z" level=info msg="StopPodSandbox for \"b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a\""
Mar 6 01:45:47.728693 containerd[1453]: time="2026-03-06T01:45:47.728691701Z" level=info msg="Container to stop \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 01:45:47.730629 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a-shm.mount: Deactivated successfully.
Mar 6 01:45:47.733382 systemd[1]: cri-containerd-9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef.scope: Deactivated successfully.
Mar 6 01:45:47.733760 systemd[1]: cri-containerd-9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef.scope: Consumed 10.376s CPU time.
Mar 6 01:45:47.750801 systemd[1]: cri-containerd-b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a.scope: Deactivated successfully.
Mar 6 01:45:47.761978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef-rootfs.mount: Deactivated successfully.
Mar 6 01:45:47.771679 containerd[1453]: time="2026-03-06T01:45:47.771600177Z" level=info msg="shim disconnected" id=9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef namespace=k8s.io
Mar 6 01:45:47.771679 containerd[1453]: time="2026-03-06T01:45:47.771674978Z" level=warning msg="cleaning up after shim disconnected" id=9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef namespace=k8s.io
Mar 6 01:45:47.771679 containerd[1453]: time="2026-03-06T01:45:47.771684776Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 6 01:45:47.789731 containerd[1453]: time="2026-03-06T01:45:47.789542937Z" level=info msg="shim disconnected" id=b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a namespace=k8s.io
Mar 6 01:45:47.789731 containerd[1453]: time="2026-03-06T01:45:47.789594503Z" level=warning msg="cleaning up after shim disconnected" id=b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a namespace=k8s.io
Mar 6 01:45:47.789731 containerd[1453]: time="2026-03-06T01:45:47.789604221Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 6 01:45:47.792395 containerd[1453]: time="2026-03-06T01:45:47.790597191Z" level=warning msg="cleanup warnings time=\"2026-03-06T01:45:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 6 01:45:47.794843 containerd[1453]: time="2026-03-06T01:45:47.794785634Z" level=info msg="StopContainer for \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\" returns successfully"
Mar 6 01:45:47.795795 containerd[1453]: time="2026-03-06T01:45:47.795616741Z" level=info msg="StopPodSandbox for \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\""
Mar 6 01:45:47.795795 containerd[1453]: time="2026-03-06T01:45:47.795657869Z" level=info msg="Container to stop \"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 01:45:47.795795 containerd[1453]: time="2026-03-06T01:45:47.795669290Z" level=info msg="Container to stop \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 01:45:47.795795 containerd[1453]: time="2026-03-06T01:45:47.795680091Z" level=info msg="Container to stop \"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 01:45:47.795795 containerd[1453]: time="2026-03-06T01:45:47.795690029Z" level=info msg="Container to stop \"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 01:45:47.795795 containerd[1453]: time="2026-03-06T01:45:47.795698515Z" level=info msg="Container to stop \"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 01:45:47.805081 systemd[1]: cri-containerd-49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81.scope: Deactivated successfully.
Mar 6 01:45:47.821548 containerd[1453]: time="2026-03-06T01:45:47.821292591Z" level=info msg="TearDown network for sandbox \"b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a\" successfully"
Mar 6 01:45:47.821548 containerd[1453]: time="2026-03-06T01:45:47.821362260Z" level=info msg="StopPodSandbox for \"b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a\" returns successfully"
Mar 6 01:45:47.838996 containerd[1453]: time="2026-03-06T01:45:47.838848895Z" level=info msg="shim disconnected" id=49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81 namespace=k8s.io
Mar 6 01:45:47.838996 containerd[1453]: time="2026-03-06T01:45:47.838915059Z" level=warning msg="cleaning up after shim disconnected" id=49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81 namespace=k8s.io
Mar 6 01:45:47.838996 containerd[1453]: time="2026-03-06T01:45:47.838930247Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 6 01:45:47.856636 containerd[1453]: time="2026-03-06T01:45:47.856547908Z" level=warning msg="cleanup warnings time=\"2026-03-06T01:45:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 6 01:45:47.858569 containerd[1453]: time="2026-03-06T01:45:47.858420024Z" level=info msg="TearDown network for sandbox \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" successfully"
Mar 6 01:45:47.858569 containerd[1453]: time="2026-03-06T01:45:47.858557542Z" level=info msg="StopPodSandbox for \"49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81\" returns successfully"
Mar 6 01:45:47.933882 kubelet[2555]: I0306 01:45:47.933696 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0863c04f-8c11-43b1-a4a0-c84fc353665d-clustermesh-secrets\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.933882 kubelet[2555]: I0306 01:45:47.933754 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-run\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.933882 kubelet[2555]: I0306 01:45:47.933775 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-cgroup\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.933882 kubelet[2555]: I0306 01:45:47.933802 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqc6q\" (UniqueName: \"kubernetes.io/projected/aa8348b8-0358-4c84-a119-9e16eec798fe-kube-api-access-nqc6q\") pod \"aa8348b8-0358-4c84-a119-9e16eec798fe\" (UID: \"aa8348b8-0358-4c84-a119-9e16eec798fe\") "
Mar 6 01:45:47.933882 kubelet[2555]: I0306 01:45:47.933828 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-xtables-lock\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.933882 kubelet[2555]: I0306 01:45:47.933852 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cni-path\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.934704 kubelet[2555]: I0306 01:45:47.933872 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-lib-modules\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.934704 kubelet[2555]: I0306 01:45:47.933896 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-etc-cni-netd\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.934704 kubelet[2555]: I0306 01:45:47.933920 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfz29\" (UniqueName: \"kubernetes.io/projected/0863c04f-8c11-43b1-a4a0-c84fc353665d-kube-api-access-cfz29\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.934704 kubelet[2555]: I0306 01:45:47.933943 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-hostproc\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.934704 kubelet[2555]: I0306 01:45:47.933972 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-config-path\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.934704 kubelet[2555]: I0306 01:45:47.933997 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0863c04f-8c11-43b1-a4a0-c84fc353665d-hubble-tls\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.934839 kubelet[2555]: I0306 01:45:47.934016 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-host-proc-sys-kernel\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.934839 kubelet[2555]: I0306 01:45:47.934039 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa8348b8-0358-4c84-a119-9e16eec798fe-cilium-config-path\") pod \"aa8348b8-0358-4c84-a119-9e16eec798fe\" (UID: \"aa8348b8-0358-4c84-a119-9e16eec798fe\") "
Mar 6 01:45:47.934839 kubelet[2555]: I0306 01:45:47.934061 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-host-proc-sys-net\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.934839 kubelet[2555]: I0306 01:45:47.934081 2555 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-bpf-maps\") pod \"0863c04f-8c11-43b1-a4a0-c84fc353665d\" (UID: \"0863c04f-8c11-43b1-a4a0-c84fc353665d\") "
Mar 6 01:45:47.934839 kubelet[2555]: I0306 01:45:47.934267 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 01:45:47.939493 kubelet[2555]: I0306 01:45:47.934284 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 01:45:47.939493 kubelet[2555]: I0306 01:45:47.934388 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-hostproc" (OuterVolumeSpecName: "hostproc") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 01:45:47.939493 kubelet[2555]: I0306 01:45:47.936879 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 01:45:47.939493 kubelet[2555]: I0306 01:45:47.936961 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 01:45:47.939493 kubelet[2555]: I0306 01:45:47.938866 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 6 01:45:47.940360 kubelet[2555]: I0306 01:45:47.939989 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 01:45:47.940360 kubelet[2555]: I0306 01:45:47.940022 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cni-path" (OuterVolumeSpecName: "cni-path") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 01:45:47.940360 kubelet[2555]: I0306 01:45:47.940038 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 01:45:47.940691 kubelet[2555]: I0306 01:45:47.940672 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 01:45:47.940764 kubelet[2555]: I0306 01:45:47.940751 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 01:45:47.942972 kubelet[2555]: I0306 01:45:47.942922 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0863c04f-8c11-43b1-a4a0-c84fc353665d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 6 01:45:47.943227 kubelet[2555]: I0306 01:45:47.943106 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0863c04f-8c11-43b1-a4a0-c84fc353665d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 6 01:45:47.945713 kubelet[2555]: I0306 01:45:47.945634 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa8348b8-0358-4c84-a119-9e16eec798fe-kube-api-access-nqc6q" (OuterVolumeSpecName: "kube-api-access-nqc6q") pod "aa8348b8-0358-4c84-a119-9e16eec798fe" (UID: "aa8348b8-0358-4c84-a119-9e16eec798fe"). InnerVolumeSpecName "kube-api-access-nqc6q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 6 01:45:47.946355 kubelet[2555]: I0306 01:45:47.946272 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0863c04f-8c11-43b1-a4a0-c84fc353665d-kube-api-access-cfz29" (OuterVolumeSpecName: "kube-api-access-cfz29") pod "0863c04f-8c11-43b1-a4a0-c84fc353665d" (UID: "0863c04f-8c11-43b1-a4a0-c84fc353665d"). InnerVolumeSpecName "kube-api-access-cfz29". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 6 01:45:47.946419 kubelet[2555]: I0306 01:45:47.946286 2555 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa8348b8-0358-4c84-a119-9e16eec798fe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aa8348b8-0358-4c84-a119-9e16eec798fe" (UID: "aa8348b8-0358-4c84-a119-9e16eec798fe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 6 01:45:48.034792 kubelet[2555]: I0306 01:45:48.034668 2555 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.034792 kubelet[2555]: I0306 01:45:48.034715 2555 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.034792 kubelet[2555]: I0306 01:45:48.034724 2555 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0863c04f-8c11-43b1-a4a0-c84fc353665d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.034792 kubelet[2555]: I0306 01:45:48.034732 2555 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.034792 kubelet[2555]: I0306 01:45:48.034739 2555 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.034792 kubelet[2555]: I0306 01:45:48.034747 2555 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nqc6q\" (UniqueName: \"kubernetes.io/projected/aa8348b8-0358-4c84-a119-9e16eec798fe-kube-api-access-nqc6q\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.034792 kubelet[2555]: I0306 01:45:48.034755 2555 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.034792 kubelet[2555]: I0306 01:45:48.034762 2555 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.035262 kubelet[2555]: I0306 01:45:48.034771 2555 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.035262 kubelet[2555]: I0306 01:45:48.034778 2555 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.035262 kubelet[2555]: I0306 01:45:48.034785 2555 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cfz29\" (UniqueName: \"kubernetes.io/projected/0863c04f-8c11-43b1-a4a0-c84fc353665d-kube-api-access-cfz29\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.035262 kubelet[2555]: I0306 01:45:48.034792 2555 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.035262 kubelet[2555]: I0306 01:45:48.034800 2555 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0863c04f-8c11-43b1-a4a0-c84fc353665d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.035262 kubelet[2555]: I0306 01:45:48.034807 2555 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0863c04f-8c11-43b1-a4a0-c84fc353665d-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.035262 kubelet[2555]: I0306 01:45:48.034814 2555 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0863c04f-8c11-43b1-a4a0-c84fc353665d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.035262 kubelet[2555]: I0306 01:45:48.034822 2555 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa8348b8-0358-4c84-a119-9e16eec798fe-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 6 01:45:48.338866 kubelet[2555]: I0306 01:45:48.338710 2555 scope.go:117] "RemoveContainer" containerID="9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef"
Mar 6 01:45:48.341114 containerd[1453]: time="2026-03-06T01:45:48.340986106Z" level=info msg="RemoveContainer for \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\""
Mar 6 01:45:48.357569 systemd[1]: Removed slice kubepods-burstable-pod0863c04f_8c11_43b1_a4a0_c84fc353665d.slice - libcontainer container kubepods-burstable-pod0863c04f_8c11_43b1_a4a0_c84fc353665d.slice.
Mar 6 01:45:48.357812 systemd[1]: kubepods-burstable-pod0863c04f_8c11_43b1_a4a0_c84fc353665d.slice: Consumed 10.575s CPU time.
Mar 6 01:45:48.361647 containerd[1453]: time="2026-03-06T01:45:48.361566345Z" level=info msg="RemoveContainer for \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\" returns successfully"
Mar 6 01:45:48.362496 kubelet[2555]: I0306 01:45:48.362413 2555 scope.go:117] "RemoveContainer" containerID="d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893"
Mar 6 01:45:48.363303 systemd[1]: Removed slice kubepods-besteffort-podaa8348b8_0358_4c84_a119_9e16eec798fe.slice - libcontainer container kubepods-besteffort-podaa8348b8_0358_4c84_a119_9e16eec798fe.slice.
Mar 6 01:45:48.365539 containerd[1453]: time="2026-03-06T01:45:48.365227535Z" level=info msg="RemoveContainer for \"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893\""
Mar 6 01:45:48.372427 containerd[1453]: time="2026-03-06T01:45:48.372300530Z" level=info msg="RemoveContainer for \"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893\" returns successfully"
Mar 6 01:45:48.372839 kubelet[2555]: I0306 01:45:48.372712 2555 scope.go:117] "RemoveContainer" containerID="683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479"
Mar 6 01:45:48.382799 containerd[1453]: time="2026-03-06T01:45:48.382718734Z" level=info msg="RemoveContainer for \"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479\""
Mar 6 01:45:48.402507 containerd[1453]: time="2026-03-06T01:45:48.398365327Z" level=info msg="RemoveContainer for \"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479\" returns successfully"
Mar 6 01:45:48.402634 kubelet[2555]: I0306 01:45:48.399085 2555 scope.go:117] "RemoveContainer" containerID="9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675"
Mar 6 01:45:48.405811 containerd[1453]: time="2026-03-06T01:45:48.405779760Z" level=info msg="RemoveContainer for \"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675\""
Mar 6 01:45:48.415702 containerd[1453]: time="2026-03-06T01:45:48.411510460Z" level=info msg="RemoveContainer for \"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675\" returns successfully"
Mar 6 01:45:48.415814 kubelet[2555]: I0306 01:45:48.413232 2555 scope.go:117] "RemoveContainer" containerID="36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455"
Mar 6 01:45:48.417425 containerd[1453]: time="2026-03-06T01:45:48.417395027Z" level=info msg="RemoveContainer for \"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455\""
Mar 6 01:45:48.425593 containerd[1453]: time="2026-03-06T01:45:48.423503064Z" level=info msg="RemoveContainer for \"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455\" returns successfully"
Mar 6 01:45:48.425740 kubelet[2555]: I0306 01:45:48.425699 2555 scope.go:117] "RemoveContainer" containerID="9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef"
Mar 6 01:45:48.429636 containerd[1453]: time="2026-03-06T01:45:48.429491136Z" level=error msg="ContainerStatus for \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\": not found"
Mar 6 01:45:48.438961 kubelet[2555]: E0306 01:45:48.438856 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\": not found" containerID="9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef"
Mar 6 01:45:48.439074 kubelet[2555]: I0306 01:45:48.438958 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef"} err="failed to get container status \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ec7efceb4864137ae1857e8b00d18e71f9b64c6fcf20635472889089b6de2ef\": not found"
Mar 6 01:45:48.439074 kubelet[2555]: I0306 01:45:48.439069 2555 scope.go:117] "RemoveContainer" containerID="d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893"
Mar 6 01:45:48.439580 containerd[1453]: time="2026-03-06T01:45:48.439518653Z" level=error msg="ContainerStatus for \"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893\": not found"
Mar 6 01:45:48.439879 kubelet[2555]: E0306 01:45:48.439741 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893\": not found" containerID="d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893"
Mar 6 01:45:48.439879 kubelet[2555]: I0306 01:45:48.439852 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893"} err="failed to get container status \"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893\": rpc error: code = NotFound desc = an error occurred when try to find container \"d097b227ae3c805f43dd66421f93fe4e59eaaa1419c41b032bce0d82c8374893\": not found"
Mar 6 01:45:48.439879 kubelet[2555]: I0306 01:45:48.439870 2555 scope.go:117] "RemoveContainer" containerID="683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479"
Mar 6 01:45:48.440192 containerd[1453]: time="2026-03-06T01:45:48.440146290Z" level=error msg="ContainerStatus for \"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479\": not found"
Mar 6 01:45:48.440482 kubelet[2555]: E0306 01:45:48.440370 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479\": not found" containerID="683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479"
Mar 6 01:45:48.440482 kubelet[2555]: I0306 01:45:48.440402 2555 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"containerd","ID":"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479"} err="failed to get container status \"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479\": rpc error: code = NotFound desc = an error occurred when try to find container \"683acde366dd8873de520ceeb042452bd4505e5b56081825dcc117a028b8c479\": not found" Mar 6 01:45:48.440588 kubelet[2555]: I0306 01:45:48.440424 2555 scope.go:117] "RemoveContainer" containerID="9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675" Mar 6 01:45:48.440828 containerd[1453]: time="2026-03-06T01:45:48.440780638Z" level=error msg="ContainerStatus for \"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675\": not found" Mar 6 01:45:48.440944 kubelet[2555]: E0306 01:45:48.440903 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675\": not found" containerID="9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675" Mar 6 01:45:48.440988 kubelet[2555]: I0306 01:45:48.440950 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675"} err="failed to get container status \"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675\": rpc error: code = NotFound desc = an error occurred when try to find container \"9fbcbb469558369feab798c6653b6125c484c6cdfa55506b607f4fd941902675\": not found" Mar 6 01:45:48.440988 kubelet[2555]: I0306 01:45:48.440967 2555 scope.go:117] "RemoveContainer" containerID="36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455" Mar 6 01:45:48.441226 containerd[1453]: 
time="2026-03-06T01:45:48.441171107Z" level=error msg="ContainerStatus for \"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455\": not found" Mar 6 01:45:48.441609 kubelet[2555]: E0306 01:45:48.441533 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455\": not found" containerID="36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455" Mar 6 01:45:48.441609 kubelet[2555]: I0306 01:45:48.441593 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455"} err="failed to get container status \"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455\": rpc error: code = NotFound desc = an error occurred when try to find container \"36f48350f108084045eccfb3d5096cef28e62dc5fa897d070cbae1217ec96455\": not found" Mar 6 01:45:48.441699 kubelet[2555]: I0306 01:45:48.441610 2555 scope.go:117] "RemoveContainer" containerID="39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123" Mar 6 01:45:48.443188 containerd[1453]: time="2026-03-06T01:45:48.442999787Z" level=info msg="RemoveContainer for \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\"" Mar 6 01:45:48.447710 containerd[1453]: time="2026-03-06T01:45:48.447631257Z" level=info msg="RemoveContainer for \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\" returns successfully" Mar 6 01:45:48.448006 kubelet[2555]: I0306 01:45:48.447873 2555 scope.go:117] "RemoveContainer" containerID="39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123" Mar 6 01:45:48.448226 containerd[1453]: time="2026-03-06T01:45:48.448159266Z" 
level=error msg="ContainerStatus for \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\": not found" Mar 6 01:45:48.448529 kubelet[2555]: E0306 01:45:48.448393 2555 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\": not found" containerID="39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123" Mar 6 01:45:48.448595 kubelet[2555]: I0306 01:45:48.448523 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123"} err="failed to get container status \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\": rpc error: code = NotFound desc = an error occurred when try to find container \"39903bbfa8c4909edc7f695c326552e9bd621f8bdd8f05ea2bbd7081514bf123\": not found" Mar 6 01:45:48.665109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2b67fc2d5a5b7ef6648c8bb0fbf3732ebf1bef8d3090ce708caeb4fc7d80f9a-rootfs.mount: Deactivated successfully. Mar 6 01:45:48.665299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81-rootfs.mount: Deactivated successfully. Mar 6 01:45:48.665562 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49d086317879d90eff0b627a71d96962200f2e39fe151d79f597b0cc36c15e81-shm.mount: Deactivated successfully. Mar 6 01:45:48.665697 systemd[1]: var-lib-kubelet-pods-aa8348b8\x2d0358\x2d4c84\x2da119\x2d9e16eec798fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqc6q.mount: Deactivated successfully. 
Mar 6 01:45:48.665822 systemd[1]: var-lib-kubelet-pods-0863c04f\x2d8c11\x2d43b1\x2da4a0\x2dc84fc353665d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcfz29.mount: Deactivated successfully. Mar 6 01:45:48.665945 systemd[1]: var-lib-kubelet-pods-0863c04f\x2d8c11\x2d43b1\x2da4a0\x2dc84fc353665d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 6 01:45:48.666063 systemd[1]: var-lib-kubelet-pods-0863c04f\x2d8c11\x2d43b1\x2da4a0\x2dc84fc353665d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 6 01:45:49.553552 sshd[4166]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:49.568360 systemd[1]: sshd@23-10.0.0.105:22-10.0.0.1:47494.service: Deactivated successfully. Mar 6 01:45:49.571270 systemd[1]: session-24.scope: Deactivated successfully. Mar 6 01:45:49.574078 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Mar 6 01:45:49.580929 systemd[1]: Started sshd@24-10.0.0.105:22-10.0.0.1:47510.service - OpenSSH per-connection server daemon (10.0.0.1:47510). Mar 6 01:45:49.582247 systemd-logind[1442]: Removed session 24. Mar 6 01:45:49.624210 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 47510 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:49.626582 sshd[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:49.633511 systemd-logind[1442]: New session 25 of user core. Mar 6 01:45:49.645692 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 6 01:45:49.681978 kubelet[2555]: I0306 01:45:49.681884 2555 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0863c04f-8c11-43b1-a4a0-c84fc353665d" path="/var/lib/kubelet/pods/0863c04f-8c11-43b1-a4a0-c84fc353665d/volumes" Mar 6 01:45:49.683231 kubelet[2555]: I0306 01:45:49.683152 2555 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa8348b8-0358-4c84-a119-9e16eec798fe" path="/var/lib/kubelet/pods/aa8348b8-0358-4c84-a119-9e16eec798fe/volumes" Mar 6 01:45:50.242656 sshd[4331]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:50.253974 systemd[1]: sshd@24-10.0.0.105:22-10.0.0.1:47510.service: Deactivated successfully. Mar 6 01:45:50.256252 systemd[1]: session-25.scope: Deactivated successfully. Mar 6 01:45:50.260245 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit. Mar 6 01:45:50.268762 systemd[1]: Started sshd@25-10.0.0.105:22-10.0.0.1:59890.service - OpenSSH per-connection server daemon (10.0.0.1:59890). Mar 6 01:45:50.272142 systemd-logind[1442]: Removed session 25. Mar 6 01:45:50.323309 systemd[1]: Created slice kubepods-burstable-pod8e5f898c_38fd_4b75_96c1_7398214c5e2c.slice - libcontainer container kubepods-burstable-pod8e5f898c_38fd_4b75_96c1_7398214c5e2c.slice. 
Mar 6 01:45:50.344515 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 59890 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:50.347024 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:50.350592 kubelet[2555]: I0306 01:45:50.349913 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e5f898c-38fd-4b75-96c1-7398214c5e2c-xtables-lock\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.350592 kubelet[2555]: I0306 01:45:50.349964 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e5f898c-38fd-4b75-96c1-7398214c5e2c-hubble-tls\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.350592 kubelet[2555]: I0306 01:45:50.350004 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e5f898c-38fd-4b75-96c1-7398214c5e2c-cilium-config-path\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.350592 kubelet[2555]: I0306 01:45:50.350035 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e5f898c-38fd-4b75-96c1-7398214c5e2c-bpf-maps\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.350592 kubelet[2555]: I0306 01:45:50.350058 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/8e5f898c-38fd-4b75-96c1-7398214c5e2c-clustermesh-secrets\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.350592 kubelet[2555]: I0306 01:45:50.350078 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e5f898c-38fd-4b75-96c1-7398214c5e2c-cilium-run\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.351002 kubelet[2555]: I0306 01:45:50.350100 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e5f898c-38fd-4b75-96c1-7398214c5e2c-hostproc\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.351002 kubelet[2555]: I0306 01:45:50.350126 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e5f898c-38fd-4b75-96c1-7398214c5e2c-etc-cni-netd\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.351002 kubelet[2555]: I0306 01:45:50.350149 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8e5f898c-38fd-4b75-96c1-7398214c5e2c-cilium-ipsec-secrets\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.351002 kubelet[2555]: I0306 01:45:50.350168 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e5f898c-38fd-4b75-96c1-7398214c5e2c-host-proc-sys-net\") pod \"cilium-nbhcw\" (UID: 
\"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.351002 kubelet[2555]: I0306 01:45:50.350194 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lt9h\" (UniqueName: \"kubernetes.io/projected/8e5f898c-38fd-4b75-96c1-7398214c5e2c-kube-api-access-8lt9h\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.351002 kubelet[2555]: I0306 01:45:50.350222 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e5f898c-38fd-4b75-96c1-7398214c5e2c-lib-modules\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.351268 kubelet[2555]: I0306 01:45:50.350246 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e5f898c-38fd-4b75-96c1-7398214c5e2c-host-proc-sys-kernel\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.351268 kubelet[2555]: I0306 01:45:50.350267 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e5f898c-38fd-4b75-96c1-7398214c5e2c-cilium-cgroup\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.351268 kubelet[2555]: I0306 01:45:50.350291 2555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e5f898c-38fd-4b75-96c1-7398214c5e2c-cni-path\") pod \"cilium-nbhcw\" (UID: \"8e5f898c-38fd-4b75-96c1-7398214c5e2c\") " pod="kube-system/cilium-nbhcw" Mar 6 01:45:50.355215 systemd-logind[1442]: New 
session 26 of user core. Mar 6 01:45:50.360984 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 6 01:45:50.420781 sshd[4344]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:50.433090 systemd[1]: sshd@25-10.0.0.105:22-10.0.0.1:59890.service: Deactivated successfully. Mar 6 01:45:50.436093 systemd[1]: session-26.scope: Deactivated successfully. Mar 6 01:45:50.439215 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit. Mar 6 01:45:50.450279 systemd[1]: Started sshd@26-10.0.0.105:22-10.0.0.1:59904.service - OpenSSH per-connection server daemon (10.0.0.1:59904). Mar 6 01:45:50.451861 systemd-logind[1442]: Removed session 26. Mar 6 01:45:50.494226 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 59904 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:45:50.497085 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:50.503214 systemd-logind[1442]: New session 27 of user core. Mar 6 01:45:50.515070 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 6 01:45:50.642802 kubelet[2555]: E0306 01:45:50.641946 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:50.643008 containerd[1453]: time="2026-03-06T01:45:50.642824149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nbhcw,Uid:8e5f898c-38fd-4b75-96c1-7398214c5e2c,Namespace:kube-system,Attempt:0,}" Mar 6 01:45:50.743921 containerd[1453]: time="2026-03-06T01:45:50.741953652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:50.743921 containerd[1453]: time="2026-03-06T01:45:50.742077765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:50.743921 containerd[1453]: time="2026-03-06T01:45:50.742123792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:50.743921 containerd[1453]: time="2026-03-06T01:45:50.742276678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:50.789748 systemd[1]: Started cri-containerd-454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020.scope - libcontainer container 454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020. Mar 6 01:45:50.820270 kubelet[2555]: E0306 01:45:50.819923 2555 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 6 01:45:50.861215 containerd[1453]: time="2026-03-06T01:45:50.861170743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nbhcw,Uid:8e5f898c-38fd-4b75-96c1-7398214c5e2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020\"" Mar 6 01:45:50.868319 kubelet[2555]: E0306 01:45:50.867406 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:50.877514 containerd[1453]: time="2026-03-06T01:45:50.877299415Z" level=info msg="CreateContainer within sandbox \"454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 6 01:45:50.934110 containerd[1453]: time="2026-03-06T01:45:50.933594343Z" level=info msg="CreateContainer within sandbox \"454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns 
container id \"2c2fa0f27c85406428e6fcd527898047f63dca31f9ba47f75daae54811a8e1c5\"" Mar 6 01:45:50.937647 containerd[1453]: time="2026-03-06T01:45:50.934749751Z" level=info msg="StartContainer for \"2c2fa0f27c85406428e6fcd527898047f63dca31f9ba47f75daae54811a8e1c5\"" Mar 6 01:45:51.032979 systemd[1]: Started cri-containerd-2c2fa0f27c85406428e6fcd527898047f63dca31f9ba47f75daae54811a8e1c5.scope - libcontainer container 2c2fa0f27c85406428e6fcd527898047f63dca31f9ba47f75daae54811a8e1c5. Mar 6 01:45:51.136517 containerd[1453]: time="2026-03-06T01:45:51.133842087Z" level=info msg="StartContainer for \"2c2fa0f27c85406428e6fcd527898047f63dca31f9ba47f75daae54811a8e1c5\" returns successfully" Mar 6 01:45:51.212561 systemd[1]: cri-containerd-2c2fa0f27c85406428e6fcd527898047f63dca31f9ba47f75daae54811a8e1c5.scope: Deactivated successfully. Mar 6 01:45:51.322175 containerd[1453]: time="2026-03-06T01:45:51.321605514Z" level=info msg="shim disconnected" id=2c2fa0f27c85406428e6fcd527898047f63dca31f9ba47f75daae54811a8e1c5 namespace=k8s.io Mar 6 01:45:51.322175 containerd[1453]: time="2026-03-06T01:45:51.321838281Z" level=warning msg="cleaning up after shim disconnected" id=2c2fa0f27c85406428e6fcd527898047f63dca31f9ba47f75daae54811a8e1c5 namespace=k8s.io Mar 6 01:45:51.322175 containerd[1453]: time="2026-03-06T01:45:51.321851997Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:45:51.363821 kubelet[2555]: E0306 01:45:51.363775 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:51.384996 containerd[1453]: time="2026-03-06T01:45:51.384848219Z" level=info msg="CreateContainer within sandbox \"454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 6 01:45:51.437767 containerd[1453]: time="2026-03-06T01:45:51.435746698Z" level=info 
msg="CreateContainer within sandbox \"454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c6a1e46de30b573231f5ff6e2da41fad5e658cf367f5710c85a009652aada9e\"" Mar 6 01:45:51.437767 containerd[1453]: time="2026-03-06T01:45:51.437399630Z" level=info msg="StartContainer for \"8c6a1e46de30b573231f5ff6e2da41fad5e658cf367f5710c85a009652aada9e\"" Mar 6 01:45:51.541499 systemd[1]: Started cri-containerd-8c6a1e46de30b573231f5ff6e2da41fad5e658cf367f5710c85a009652aada9e.scope - libcontainer container 8c6a1e46de30b573231f5ff6e2da41fad5e658cf367f5710c85a009652aada9e. Mar 6 01:45:51.640331 containerd[1453]: time="2026-03-06T01:45:51.637679275Z" level=info msg="StartContainer for \"8c6a1e46de30b573231f5ff6e2da41fad5e658cf367f5710c85a009652aada9e\" returns successfully" Mar 6 01:45:51.649924 systemd[1]: cri-containerd-8c6a1e46de30b573231f5ff6e2da41fad5e658cf367f5710c85a009652aada9e.scope: Deactivated successfully. Mar 6 01:45:51.717318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c6a1e46de30b573231f5ff6e2da41fad5e658cf367f5710c85a009652aada9e-rootfs.mount: Deactivated successfully. 
Mar 6 01:45:51.748280 containerd[1453]: time="2026-03-06T01:45:51.748193193Z" level=info msg="shim disconnected" id=8c6a1e46de30b573231f5ff6e2da41fad5e658cf367f5710c85a009652aada9e namespace=k8s.io Mar 6 01:45:51.748280 containerd[1453]: time="2026-03-06T01:45:51.748257173Z" level=warning msg="cleaning up after shim disconnected" id=8c6a1e46de30b573231f5ff6e2da41fad5e658cf367f5710c85a009652aada9e namespace=k8s.io Mar 6 01:45:51.748280 containerd[1453]: time="2026-03-06T01:45:51.748268885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:45:52.391657 kubelet[2555]: E0306 01:45:52.390006 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:52.426836 containerd[1453]: time="2026-03-06T01:45:52.425758610Z" level=info msg="CreateContainer within sandbox \"454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 6 01:45:52.510240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788778617.mount: Deactivated successfully. Mar 6 01:45:52.606696 containerd[1453]: time="2026-03-06T01:45:52.602733748Z" level=info msg="CreateContainer within sandbox \"454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4231c71c183c385a526e061560fd9acb96e226ca1ee3021edaa7c953b152263c\"" Mar 6 01:45:52.606696 containerd[1453]: time="2026-03-06T01:45:52.603743263Z" level=info msg="StartContainer for \"4231c71c183c385a526e061560fd9acb96e226ca1ee3021edaa7c953b152263c\"" Mar 6 01:45:52.712163 systemd[1]: run-containerd-runc-k8s.io-4231c71c183c385a526e061560fd9acb96e226ca1ee3021edaa7c953b152263c-runc.w3B9od.mount: Deactivated successfully. 
Mar 6 01:45:52.734692 systemd[1]: Started cri-containerd-4231c71c183c385a526e061560fd9acb96e226ca1ee3021edaa7c953b152263c.scope - libcontainer container 4231c71c183c385a526e061560fd9acb96e226ca1ee3021edaa7c953b152263c. Mar 6 01:45:52.908127 systemd[1]: cri-containerd-4231c71c183c385a526e061560fd9acb96e226ca1ee3021edaa7c953b152263c.scope: Deactivated successfully. Mar 6 01:45:52.910855 containerd[1453]: time="2026-03-06T01:45:52.908325333Z" level=info msg="StartContainer for \"4231c71c183c385a526e061560fd9acb96e226ca1ee3021edaa7c953b152263c\" returns successfully" Mar 6 01:45:53.052917 containerd[1453]: time="2026-03-06T01:45:53.051721413Z" level=info msg="shim disconnected" id=4231c71c183c385a526e061560fd9acb96e226ca1ee3021edaa7c953b152263c namespace=k8s.io Mar 6 01:45:53.052917 containerd[1453]: time="2026-03-06T01:45:53.051777730Z" level=warning msg="cleaning up after shim disconnected" id=4231c71c183c385a526e061560fd9acb96e226ca1ee3021edaa7c953b152263c namespace=k8s.io Mar 6 01:45:53.052917 containerd[1453]: time="2026-03-06T01:45:53.051790173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:45:53.409853 kubelet[2555]: E0306 01:45:53.409110 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:53.466968 containerd[1453]: time="2026-03-06T01:45:53.465355252Z" level=info msg="CreateContainer within sandbox \"454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 6 01:45:53.516777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4231c71c183c385a526e061560fd9acb96e226ca1ee3021edaa7c953b152263c-rootfs.mount: Deactivated successfully. 
Mar 6 01:45:53.525586 containerd[1453]: time="2026-03-06T01:45:53.524747967Z" level=info msg="CreateContainer within sandbox \"454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"162ff1981153bb6c55e082bb29b7532c0cb30985b0f7dfa8b612acd8525a0bf4\"" Mar 6 01:45:53.525326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146886452.mount: Deactivated successfully. Mar 6 01:45:53.531573 containerd[1453]: time="2026-03-06T01:45:53.526634551Z" level=info msg="StartContainer for \"162ff1981153bb6c55e082bb29b7532c0cb30985b0f7dfa8b612acd8525a0bf4\"" Mar 6 01:45:53.627204 systemd[1]: run-containerd-runc-k8s.io-162ff1981153bb6c55e082bb29b7532c0cb30985b0f7dfa8b612acd8525a0bf4-runc.kC3FIC.mount: Deactivated successfully. Mar 6 01:45:53.654359 systemd[1]: Started cri-containerd-162ff1981153bb6c55e082bb29b7532c0cb30985b0f7dfa8b612acd8525a0bf4.scope - libcontainer container 162ff1981153bb6c55e082bb29b7532c0cb30985b0f7dfa8b612acd8525a0bf4. Mar 6 01:45:53.683953 kubelet[2555]: E0306 01:45:53.682801 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:53.748764 systemd[1]: cri-containerd-162ff1981153bb6c55e082bb29b7532c0cb30985b0f7dfa8b612acd8525a0bf4.scope: Deactivated successfully. 
Mar 6 01:45:53.761979 containerd[1453]: time="2026-03-06T01:45:53.760933317Z" level=info msg="StartContainer for \"162ff1981153bb6c55e082bb29b7532c0cb30985b0f7dfa8b612acd8525a0bf4\" returns successfully" Mar 6 01:45:53.883820 containerd[1453]: time="2026-03-06T01:45:53.882354802Z" level=info msg="shim disconnected" id=162ff1981153bb6c55e082bb29b7532c0cb30985b0f7dfa8b612acd8525a0bf4 namespace=k8s.io Mar 6 01:45:53.883820 containerd[1453]: time="2026-03-06T01:45:53.882639907Z" level=warning msg="cleaning up after shim disconnected" id=162ff1981153bb6c55e082bb29b7532c0cb30985b0f7dfa8b612acd8525a0bf4 namespace=k8s.io Mar 6 01:45:53.883820 containerd[1453]: time="2026-03-06T01:45:53.882654695Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:45:54.431222 kubelet[2555]: E0306 01:45:54.430056 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:54.457180 containerd[1453]: time="2026-03-06T01:45:54.457012855Z" level=info msg="CreateContainer within sandbox \"454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 6 01:45:54.512253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-162ff1981153bb6c55e082bb29b7532c0cb30985b0f7dfa8b612acd8525a0bf4-rootfs.mount: Deactivated successfully. 
Mar 6 01:45:54.592946 containerd[1453]: time="2026-03-06T01:45:54.591983103Z" level=info msg="CreateContainer within sandbox \"454162690509fa76cefe81031b6bb0fa45813b901104012d39bd470aa4288020\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a12396bb4b53ad6d7f001e4efcd70ceafde932fd5ffcda597eb98449f40ae19f\""
Mar 6 01:45:54.602827 containerd[1453]: time="2026-03-06T01:45:54.595359886Z" level=info msg="StartContainer for \"a12396bb4b53ad6d7f001e4efcd70ceafde932fd5ffcda597eb98449f40ae19f\""
Mar 6 01:45:54.752019 systemd[1]: Started cri-containerd-a12396bb4b53ad6d7f001e4efcd70ceafde932fd5ffcda597eb98449f40ae19f.scope - libcontainer container a12396bb4b53ad6d7f001e4efcd70ceafde932fd5ffcda597eb98449f40ae19f.
Mar 6 01:45:54.903877 containerd[1453]: time="2026-03-06T01:45:54.903685505Z" level=info msg="StartContainer for \"a12396bb4b53ad6d7f001e4efcd70ceafde932fd5ffcda597eb98449f40ae19f\" returns successfully"
Mar 6 01:45:55.455174 kubelet[2555]: E0306 01:45:55.452606 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:45:56.353106 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 6 01:45:56.634283 kubelet[2555]: E0306 01:45:56.634076 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:45:57.475256 systemd[1]: run-containerd-runc-k8s.io-a12396bb4b53ad6d7f001e4efcd70ceafde932fd5ffcda597eb98449f40ae19f-runc.Za0HNG.mount: Deactivated successfully.
Mar 6 01:45:58.679146 kubelet[2555]: E0306 01:45:58.678400 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:46:04.088299 systemd-networkd[1383]: lxc_health: Link UP
Mar 6 01:46:04.099942 systemd-networkd[1383]: lxc_health: Gained carrier
Mar 6 01:46:04.644931 kubelet[2555]: E0306 01:46:04.643412 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:46:04.743856 kubelet[2555]: I0306 01:46:04.743149 2555 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nbhcw" podStartSLOduration=14.743131317 podStartE2EDuration="14.743131317s" podCreationTimestamp="2026-03-06 01:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:45:55.526058395 +0000 UTC m=+90.029810668" watchObservedRunningTime="2026-03-06 01:46:04.743131317 +0000 UTC m=+99.246883561"
Mar 6 01:46:05.552976 kubelet[2555]: E0306 01:46:05.552816 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:46:05.875291 systemd-networkd[1383]: lxc_health: Gained IPv6LL
Mar 6 01:46:06.560725 kubelet[2555]: E0306 01:46:06.560044 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:46:07.669330 systemd[1]: run-containerd-runc-k8s.io-a12396bb4b53ad6d7f001e4efcd70ceafde932fd5ffcda597eb98449f40ae19f-runc.T3Xt1E.mount: Deactivated successfully.
Mar 6 01:46:10.693221 systemd[1]: run-containerd-runc-k8s.io-a12396bb4b53ad6d7f001e4efcd70ceafde932fd5ffcda597eb98449f40ae19f-runc.9aLQvQ.mount: Deactivated successfully.
Mar 6 01:46:11.114354 kubelet[2555]: E0306 01:46:11.114033 2555 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:46:11.797863 sshd[4352]: pam_unix(sshd:session): session closed for user core
Mar 6 01:46:11.809007 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit.
Mar 6 01:46:11.811157 systemd[1]: sshd@26-10.0.0.105:22-10.0.0.1:59904.service: Deactivated successfully.
Mar 6 01:46:11.819855 systemd[1]: session-27.scope: Deactivated successfully.
Mar 6 01:46:11.820238 systemd[1]: session-27.scope: Consumed 1.106s CPU time.
Mar 6 01:46:11.825152 systemd-logind[1442]: Removed session 27.