Mar 6 01:48:53.092335 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:31:42 -00 2026
Mar 6 01:48:53.092356 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:48:53.092367 kernel: BIOS-provided physical RAM map:
Mar 6 01:48:53.092373 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 6 01:48:53.092378 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 6 01:48:53.092383 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 6 01:48:53.092389 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 6 01:48:53.092395 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 6 01:48:53.092400 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 6 01:48:53.092406 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 6 01:48:53.092413 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 6 01:48:53.092419 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Mar 6 01:48:53.092424 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Mar 6 01:48:53.092430 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Mar 6 01:48:53.092437 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 6 01:48:53.092442 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 6 01:48:53.092451 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 6 01:48:53.092456 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 6 01:48:53.092462 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 6 01:48:53.092468 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 6 01:48:53.092473 kernel: NX (Execute Disable) protection: active
Mar 6 01:48:53.092479 kernel: APIC: Static calls initialized
Mar 6 01:48:53.092485 kernel: efi: EFI v2.7 by EDK II
Mar 6 01:48:53.092491 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Mar 6 01:48:53.092496 kernel: SMBIOS 2.8 present.
Mar 6 01:48:53.092502 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 6 01:48:53.092508 kernel: Hypervisor detected: KVM
Mar 6 01:48:53.092516 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 6 01:48:53.092521 kernel: kvm-clock: using sched offset of 26181480997 cycles
Mar 6 01:48:53.092527 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 01:48:53.092534 kernel: tsc: Detected 2445.426 MHz processor
Mar 6 01:48:53.092540 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 6 01:48:53.092546 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 6 01:48:53.092552 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 6 01:48:53.092558 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 6 01:48:53.092564 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 6 01:48:53.092572 kernel: Using GB pages for direct mapping
Mar 6 01:48:53.092578 kernel: Secure boot disabled
Mar 6 01:48:53.092584 kernel: ACPI: Early table checksum verification disabled
Mar 6 01:48:53.092626 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 6 01:48:53.092637 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 6 01:48:53.092643 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:48:53.092650 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:48:53.092658 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 6 01:48:53.092664 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:48:53.092671 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:48:53.092677 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:48:53.092683 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:48:53.092689 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 6 01:48:53.092696 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 6 01:48:53.092704 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 6 01:48:53.092710 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 6 01:48:53.092716 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 6 01:48:53.092722 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 6 01:48:53.092728 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 6 01:48:53.092735 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 6 01:48:53.092741 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 6 01:48:53.092747 kernel: No NUMA configuration found
Mar 6 01:48:53.092753 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 6 01:48:53.092761 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 6 01:48:53.092768 kernel: Zone ranges:
Mar 6 01:48:53.092774 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 6 01:48:53.092780 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 6 01:48:53.092786 kernel: Normal empty
Mar 6 01:48:53.092792 kernel: Movable zone start for each node
Mar 6 01:48:53.092798 kernel: Early memory node ranges
Mar 6 01:48:53.092804 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 6 01:48:53.092810 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 6 01:48:53.092817 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 6 01:48:53.092825 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 6 01:48:53.092831 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 6 01:48:53.092837 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 6 01:48:53.092844 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 6 01:48:53.092850 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 01:48:53.092856 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 6 01:48:53.092862 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 6 01:48:53.092868 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 01:48:53.092874 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 6 01:48:53.092883 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 6 01:48:53.092889 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 6 01:48:53.092895 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 6 01:48:53.092901 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 6 01:48:53.092908 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 6 01:48:53.092914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 6 01:48:53.092920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 6 01:48:53.092926 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 6 01:48:53.092932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 6 01:48:53.092940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 6 01:48:53.092947 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 6 01:48:53.092953 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 6 01:48:53.092959 kernel: TSC deadline timer available
Mar 6 01:48:53.092965 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 6 01:48:53.092971 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 6 01:48:53.092977 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 6 01:48:53.092983 kernel: kvm-guest: setup PV sched yield
Mar 6 01:48:53.092990 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 6 01:48:53.092998 kernel: Booting paravirtualized kernel on KVM
Mar 6 01:48:53.093004 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 6 01:48:53.093010 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 6 01:48:53.093017 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 6 01:48:53.093023 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 6 01:48:53.093029 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 6 01:48:53.093035 kernel: kvm-guest: PV spinlocks enabled
Mar 6 01:48:53.093041 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 6 01:48:53.093048 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:48:53.093057 kernel: random: crng init done
Mar 6 01:48:53.093063 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 01:48:53.093069 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 01:48:53.093075 kernel: Fallback order for Node 0: 0
Mar 6 01:48:53.093081 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 6 01:48:53.093088 kernel: Policy zone: DMA32
Mar 6 01:48:53.093094 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 01:48:53.093101 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved)
Mar 6 01:48:53.093109 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 6 01:48:53.093116 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 6 01:48:53.093122 kernel: ftrace: allocated 149 pages with 4 groups
Mar 6 01:48:53.093128 kernel: Dynamic Preempt: voluntary
Mar 6 01:48:53.093134 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 01:48:53.093149 kernel: rcu: RCU event tracing is enabled.
Mar 6 01:48:53.093158 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 6 01:48:53.093164 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 01:48:53.093171 kernel: Rude variant of Tasks RCU enabled.
Mar 6 01:48:53.093177 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 01:48:53.093184 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 01:48:53.093190 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 6 01:48:53.093199 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 6 01:48:53.093205 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 01:48:53.093252 kernel: Console: colour dummy device 80x25
Mar 6 01:48:53.093260 kernel: printk: console [ttyS0] enabled
Mar 6 01:48:53.093266 kernel: ACPI: Core revision 20230628
Mar 6 01:48:53.093276 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 6 01:48:53.093283 kernel: APIC: Switch to symmetric I/O mode setup
Mar 6 01:48:53.093289 kernel: x2apic enabled
Mar 6 01:48:53.093296 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 6 01:48:53.093302 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 6 01:48:53.093309 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 6 01:48:53.093315 kernel: kvm-guest: setup PV IPIs
Mar 6 01:48:53.093322 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 6 01:48:53.093328 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 6 01:48:53.093337 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 6 01:48:53.093344 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 6 01:48:53.093350 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 6 01:48:53.093356 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 6 01:48:53.093363 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 6 01:48:53.093369 kernel: Spectre V2 : Mitigation: Retpolines
Mar 6 01:48:53.093379 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 6 01:48:53.093385 kernel: Speculative Store Bypass: Vulnerable
Mar 6 01:48:53.093392 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 6 01:48:53.093401 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 6 01:48:53.093408 kernel: active return thunk: srso_alias_return_thunk
Mar 6 01:48:53.093414 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 6 01:48:53.093421 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 6 01:48:53.093428 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 01:48:53.093434 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 6 01:48:53.093441 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 6 01:48:53.093447 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 6 01:48:53.093454 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 6 01:48:53.093463 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 6 01:48:53.093469 kernel: Freeing SMP alternatives memory: 32K
Mar 6 01:48:53.093475 kernel: pid_max: default: 32768 minimum: 301
Mar 6 01:48:53.093482 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 6 01:48:53.093488 kernel: landlock: Up and running.
Mar 6 01:48:53.093495 kernel: SELinux: Initializing.
Mar 6 01:48:53.093501 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 01:48:53.093508 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 01:48:53.093514 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 6 01:48:53.093523 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:48:53.093530 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:48:53.093536 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:48:53.093543 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 6 01:48:53.093549 kernel: signal: max sigframe size: 1776
Mar 6 01:48:53.093556 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 01:48:53.093562 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 01:48:53.093569 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 6 01:48:53.093578 kernel: smp: Bringing up secondary CPUs ...
Mar 6 01:48:53.093584 kernel: smpboot: x86: Booting SMP configuration:
Mar 6 01:48:53.093619 kernel: .... node #0, CPUs: #1 #2 #3
Mar 6 01:48:53.093626 kernel: smp: Brought up 1 node, 4 CPUs
Mar 6 01:48:53.093632 kernel: smpboot: Max logical packages: 1
Mar 6 01:48:53.093639 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 6 01:48:53.093645 kernel: devtmpfs: initialized
Mar 6 01:48:53.093652 kernel: x86/mm: Memory block size: 128MB
Mar 6 01:48:53.093658 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 6 01:48:53.093665 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 6 01:48:53.093674 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 6 01:48:53.093681 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 6 01:48:53.093687 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 6 01:48:53.093694 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 01:48:53.093700 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 6 01:48:53.093707 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 01:48:53.093713 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 01:48:53.093720 kernel: audit: initializing netlink subsys (disabled)
Mar 6 01:48:53.093729 kernel: audit: type=2000 audit(1772761732.244:1): state=initialized audit_enabled=0 res=1
Mar 6 01:48:53.093735 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 01:48:53.093742 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 6 01:48:53.093748 kernel: cpuidle: using governor menu
Mar 6 01:48:53.093755 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 01:48:53.093761 kernel: dca service started, version 1.12.1
Mar 6 01:48:53.093768 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 6 01:48:53.093774 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 6 01:48:53.093781 kernel: PCI: Using configuration type 1 for base access
Mar 6 01:48:53.093789 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 6 01:48:53.093796 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 01:48:53.093802 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 01:48:53.093809 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 01:48:53.093815 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 01:48:53.093822 kernel: ACPI: Added _OSI(Module Device)
Mar 6 01:48:53.093828 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 01:48:53.093835 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 01:48:53.093841 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 6 01:48:53.093850 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 6 01:48:53.093856 kernel: ACPI: Interpreter enabled
Mar 6 01:48:53.093863 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 6 01:48:53.093869 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 6 01:48:53.093875 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 6 01:48:53.093882 kernel: PCI: Using E820 reservations for host bridge windows
Mar 6 01:48:53.093888 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 6 01:48:53.093895 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 6 01:48:53.094159 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 01:48:53.094457 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 6 01:48:53.094624 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 6 01:48:53.094635 kernel: PCI host bridge to bus 0000:00
Mar 6 01:48:53.094787 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 6 01:48:53.094900 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 6 01:48:53.095011 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 6 01:48:53.095127 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 6 01:48:53.095290 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 6 01:48:53.095406 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 6 01:48:53.095515 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 6 01:48:53.095760 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 6 01:48:53.095896 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 6 01:48:53.096017 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 6 01:48:53.096143 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 6 01:48:53.096328 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 6 01:48:53.096450 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 6 01:48:53.096569 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 6 01:48:53.096741 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 6 01:48:53.096863 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 6 01:48:53.096989 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 6 01:48:53.097109 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 6 01:48:53.097310 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 6 01:48:53.097436 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 6 01:48:53.097556 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 6 01:48:53.097715 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 6 01:48:53.097844 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 6 01:48:53.097971 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 6 01:48:53.098092 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 6 01:48:53.098260 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 6 01:48:53.098389 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 6 01:48:53.098518 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 6 01:48:53.098684 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 6 01:48:53.098838 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 6 01:48:53.098967 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 6 01:48:53.099086 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 6 01:48:53.099302 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 6 01:48:53.099427 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 6 01:48:53.099437 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 6 01:48:53.099443 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 6 01:48:53.099450 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 6 01:48:53.099460 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 6 01:48:53.099467 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 6 01:48:53.099473 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 6 01:48:53.099480 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 6 01:48:53.099486 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 6 01:48:53.099492 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 6 01:48:53.099499 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 6 01:48:53.099505 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 6 01:48:53.099511 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 6 01:48:53.099521 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 6 01:48:53.099527 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 6 01:48:53.099534 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 6 01:48:53.099540 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 6 01:48:53.099547 kernel: iommu: Default domain type: Translated
Mar 6 01:48:53.099553 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 6 01:48:53.099560 kernel: efivars: Registered efivars operations
Mar 6 01:48:53.099566 kernel: PCI: Using ACPI for IRQ routing
Mar 6 01:48:53.099572 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 6 01:48:53.099579 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 6 01:48:53.099624 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 6 01:48:53.099631 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 6 01:48:53.099637 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 6 01:48:53.099762 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 6 01:48:53.099881 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 6 01:48:53.099999 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 6 01:48:53.100008 kernel: vgaarb: loaded
Mar 6 01:48:53.100015 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 6 01:48:53.100025 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 6 01:48:53.100031 kernel: clocksource: Switched to clocksource kvm-clock
Mar 6 01:48:53.100038 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 01:48:53.100045 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 01:48:53.100051 kernel: pnp: PnP ACPI init
Mar 6 01:48:53.100287 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 6 01:48:53.100300 kernel: pnp: PnP ACPI: found 6 devices
Mar 6 01:48:53.100307 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 6 01:48:53.100317 kernel: NET: Registered PF_INET protocol family
Mar 6 01:48:53.100324 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 01:48:53.100330 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 6 01:48:53.100337 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 01:48:53.100343 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 01:48:53.100350 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 6 01:48:53.100356 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 6 01:48:53.100363 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 01:48:53.100369 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 01:48:53.100378 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 01:48:53.100385 kernel: NET: Registered PF_XDP protocol family
Mar 6 01:48:53.100508 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 6 01:48:53.100670 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 6 01:48:53.100784 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 6 01:48:53.100894 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 6 01:48:53.101032 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 6 01:48:53.101144 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 6 01:48:53.101310 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 6 01:48:53.101422 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 6 01:48:53.101432 kernel: PCI: CLS 0 bytes, default 64
Mar 6 01:48:53.101439 kernel: Initialise system trusted keyrings
Mar 6 01:48:53.101445 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 6 01:48:53.101452 kernel: Key type asymmetric registered
Mar 6 01:48:53.101458 kernel: Asymmetric key parser 'x509' registered
Mar 6 01:48:53.101465 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 6 01:48:53.101471 kernel: io scheduler mq-deadline registered
Mar 6 01:48:53.101482 kernel: io scheduler kyber registered
Mar 6 01:48:53.101488 kernel: io scheduler bfq registered
Mar 6 01:48:53.101495 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 6 01:48:53.101501 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 6 01:48:53.101508 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 6 01:48:53.101515 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 6 01:48:53.101521 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 01:48:53.101528 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 6 01:48:53.101534 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 6 01:48:53.101543 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 6 01:48:53.101550 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 6 01:48:53.101556 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 6 01:48:53.101721 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 6 01:48:53.101837 kernel: rtc_cmos 00:04: registered as rtc0
Mar 6 01:48:53.101949 kernel: rtc_cmos 00:04: setting system clock to 2026-03-06T01:48:52 UTC (1772761732)
Mar 6 01:48:53.102062 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 6 01:48:53.102071 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 6 01:48:53.102081 kernel: efifb: probing for efifb
Mar 6 01:48:53.102088 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Mar 6 01:48:53.102094 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Mar 6 01:48:53.102101 kernel: efifb: scrolling: redraw
Mar 6 01:48:53.102108 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Mar 6 01:48:53.102114 kernel: Console: switching to colour frame buffer device 100x37
Mar 6 01:48:53.102121 kernel: fb0: EFI VGA frame buffer device
Mar 6 01:48:53.102127 kernel: pstore: Using crash dump compression: deflate
Mar 6 01:48:53.102134 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 6 01:48:53.102143 kernel: NET: Registered PF_INET6 protocol family
Mar 6 01:48:53.102149 kernel: Segment Routing with IPv6
Mar 6 01:48:53.102156 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 01:48:53.102162 kernel: NET: Registered PF_PACKET protocol family
Mar 6 01:48:53.102169 kernel: Key type dns_resolver registered
Mar 6 01:48:53.102175 kernel: IPI shorthand broadcast: enabled
Mar 6 01:48:53.102201 kernel: sched_clock: Marking stable (1316016553, 335770777)->(1804834632, -153047302)
Mar 6 01:48:53.102256 kernel: registered taskstats version 1
Mar 6 01:48:53.102264 kernel: Loading compiled-in X.509 certificates
Mar 6 01:48:53.102274 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6d88f6264570591a57b3c9c1e1c99fca6c68b8ca'
Mar 6 01:48:53.102281 kernel: Key type .fscrypt registered
Mar 6 01:48:53.102288 kernel: Key type fscrypt-provisioning registered
Mar 6 01:48:53.102295 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 01:48:53.102305 kernel: ima: Allocated hash algorithm: sha1
Mar 6 01:48:53.102311 kernel: ima: No architecture policies found
Mar 6 01:48:53.102318 kernel: clk: Disabling unused clocks
Mar 6 01:48:53.102325 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 6 01:48:53.102334 kernel: Write protecting the kernel read-only data: 36864k
Mar 6 01:48:53.102341 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 6 01:48:53.102348 kernel: Run /init as init process
Mar 6 01:48:53.102355 kernel: with arguments:
Mar 6 01:48:53.102361 kernel: /init
Mar 6 01:48:53.102368 kernel: with environment:
Mar 6 01:48:53.102375 kernel: HOME=/
Mar 6 01:48:53.102381 kernel: TERM=linux
Mar 6 01:48:53.102390 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 6 01:48:53.102401 systemd[1]: Detected virtualization kvm.
Mar 6 01:48:53.102409 systemd[1]: Detected architecture x86-64.
Mar 6 01:48:53.102416 systemd[1]: Running in initrd.
Mar 6 01:48:53.102423 systemd[1]: No hostname configured, using default hostname.
Mar 6 01:48:53.102430 systemd[1]: Hostname set to .
Mar 6 01:48:53.102437 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 01:48:53.102444 systemd[1]: Queued start job for default target initrd.target.
Mar 6 01:48:53.102454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 01:48:53.102461 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 01:48:53.102469 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 01:48:53.102476 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 01:48:53.102484 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 01:48:53.102496 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 01:48:53.102505 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 01:48:53.102512 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 01:48:53.102520 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:48:53.102527 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:48:53.102535 systemd[1]: Reached target paths.target - Path Units.
Mar 6 01:48:53.102542 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 01:48:53.102552 systemd[1]: Reached target swap.target - Swaps.
Mar 6 01:48:53.102559 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 01:48:53.102566 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 01:48:53.102573 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 01:48:53.102581 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 01:48:53.102625 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 6 01:48:53.102632 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:48:53.102640 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:48:53.102647 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:48:53.102658 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 01:48:53.102665 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 01:48:53.102672 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 01:48:53.102680 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 01:48:53.102687 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 01:48:53.102694 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 01:48:53.102701 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 01:48:53.102709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:48:53.102738 systemd-journald[194]: Collecting audit messages is disabled.
Mar 6 01:48:53.102754 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 01:48:53.102762 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:48:53.102769 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 01:48:53.102780 systemd-journald[194]: Journal started
Mar 6 01:48:53.102795 systemd-journald[194]: Runtime Journal (/run/log/journal/2099ae4a55164ba9896c233ed7f9d40d) is 6.0M, max 48.3M, 42.2M free.
Mar 6 01:48:53.105428 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 01:48:53.108494 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 01:48:53.110433 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 01:48:53.122377 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:48:53.124461 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 01:48:53.145698 systemd-modules-load[195]: Inserted module 'overlay'
Mar 6 01:48:53.154421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:48:53.158318 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:48:53.173490 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:48:53.191288 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 01:48:53.195871 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 6 01:48:53.198533 kernel: Bridge firewalling registered
Mar 6 01:48:53.196479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:48:53.200135 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:48:53.202477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 01:48:53.216414 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:48:53.225101 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 01:48:53.235783 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:48:53.239959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 01:48:53.252297 dracut-cmdline[228]: dracut-dracut-053
Mar 6 01:48:53.256005 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:48:53.313319 systemd-resolved[231]: Positive Trust Anchors:
Mar 6 01:48:53.313362 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 01:48:53.313405 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 01:48:53.316888 systemd-resolved[231]: Defaulting to hostname 'linux'.
Mar 6 01:48:53.318509 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 01:48:53.328065 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:48:53.375319 kernel: SCSI subsystem initialized
Mar 6 01:48:53.385278 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 01:48:53.399313 kernel: iscsi: registered transport (tcp)
Mar 6 01:48:53.421351 kernel: iscsi: registered transport (qla4xxx)
Mar 6 01:48:53.421458 kernel: QLogic iSCSI HBA Driver
Mar 6 01:48:53.482937 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 01:48:53.498443 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 01:48:53.534309 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 01:48:53.534386 kernel: device-mapper: uevent: version 1.0.3
Mar 6 01:48:53.537291 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 6 01:48:53.590333 kernel: raid6: avx2x4 gen() 29293 MB/s
Mar 6 01:48:53.608323 kernel: raid6: avx2x2 gen() 28458 MB/s
Mar 6 01:48:53.627641 kernel: raid6: avx2x1 gen() 23915 MB/s
Mar 6 01:48:53.627695 kernel: raid6: using algorithm avx2x4 gen() 29293 MB/s
Mar 6 01:48:53.647404 kernel: raid6: .... xor() 4613 MB/s, rmw enabled
Mar 6 01:48:53.647491 kernel: raid6: using avx2x2 recovery algorithm
Mar 6 01:48:53.675299 kernel: xor: automatically using best checksumming function avx
Mar 6 01:48:53.853311 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 01:48:53.873071 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 01:48:53.884773 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:48:53.928847 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Mar 6 01:48:53.934674 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:48:53.962572 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 01:48:53.983485 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Mar 6 01:48:54.038284 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 01:48:54.057546 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 01:48:54.168081 kernel: hrtimer: interrupt took 6695139 ns
Mar 6 01:48:54.227342 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:48:54.243702 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 01:48:54.279311 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 6 01:48:54.295536 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 01:48:54.311076 kernel: cryptd: max_cpu_qlen set to 1000
Mar 6 01:48:54.307129 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 01:48:54.319561 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:48:54.331731 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 01:48:54.345467 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 6 01:48:54.345775 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 6 01:48:54.349877 kernel: GPT:9289727 != 19775487
Mar 6 01:48:54.349903 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 6 01:48:54.352355 kernel: GPT:9289727 != 19775487
Mar 6 01:48:54.353568 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 6 01:48:54.356316 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:48:54.363335 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 6 01:48:54.363656 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 01:48:54.384969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 01:48:54.388731 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:48:54.406466 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:48:54.426455 kernel: BTRFS: device fsid eccec0b1-0068-4620-ab61-f332f16460fa devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (460)
Mar 6 01:48:54.426529 kernel: libata version 3.00 loaded.
Mar 6 01:48:54.426771 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473)
Mar 6 01:48:54.425806 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 01:48:54.449513 kernel: AES CTR mode by8 optimization enabled
Mar 6 01:48:54.426484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:48:54.440495 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:48:54.458370 kernel: ahci 0000:00:1f.2: version 3.0
Mar 6 01:48:54.458731 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 6 01:48:54.466791 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:48:54.481402 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 6 01:48:54.481731 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 6 01:48:54.481889 kernel: scsi host0: ahci
Mar 6 01:48:54.477448 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 01:48:54.500137 kernel: scsi host1: ahci
Mar 6 01:48:54.500553 kernel: scsi host2: ahci
Mar 6 01:48:54.500843 kernel: scsi host3: ahci
Mar 6 01:48:54.501076 kernel: scsi host4: ahci
Mar 6 01:48:54.511517 kernel: scsi host5: ahci
Mar 6 01:48:54.511898 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 6 01:48:54.511912 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 6 01:48:54.511922 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 6 01:48:54.511931 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 6 01:48:54.509408 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 6 01:48:54.541175 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 6 01:48:54.541266 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 6 01:48:54.543004 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 6 01:48:54.560914 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 6 01:48:54.563098 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 6 01:48:54.570285 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 6 01:48:54.576199 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:48:54.604718 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 01:48:54.610310 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:48:54.631702 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:48:54.631730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:48:54.631743 disk-uuid[560]: Primary Header is updated.
disk-uuid[560]: Secondary Entries is updated.
disk-uuid[560]: Secondary Header is updated.
Mar 6 01:48:54.640642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:48:54.664966 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:48:54.841689 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 6 01:48:54.841826 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 6 01:48:54.842337 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 6 01:48:54.846330 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 6 01:48:54.848264 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 6 01:48:54.853786 kernel: ata3.00: applying bridge limits
Mar 6 01:48:54.854330 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 6 01:48:54.860302 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 6 01:48:54.863313 kernel: ata3.00: configured for UDMA/100
Mar 6 01:48:54.868308 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 6 01:48:54.921773 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 6 01:48:54.922175 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 6 01:48:54.943331 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 6 01:48:55.640299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:48:55.640970 disk-uuid[562]: The operation has completed successfully.
Mar 6 01:48:55.677854 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 01:48:55.678059 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 01:48:55.708552 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 01:48:55.716139 sh[596]: Success
Mar 6 01:48:55.743284 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 6 01:48:55.797203 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 01:48:55.816373 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 01:48:55.820914 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 01:48:55.854705 kernel: BTRFS info (device dm-0): first mount of filesystem eccec0b1-0068-4620-ab61-f332f16460fa
Mar 6 01:48:55.854758 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:48:55.854777 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 6 01:48:55.858171 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 6 01:48:55.860753 kernel: BTRFS info (device dm-0): using free space tree
Mar 6 01:48:55.872973 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 01:48:55.875813 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 01:48:55.890483 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 01:48:55.895264 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 01:48:55.917825 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:48:55.917849 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:48:55.917859 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:48:55.930459 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:48:55.947390 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 6 01:48:55.953312 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:48:55.962075 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 01:48:55.974468 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 01:48:56.054658 ignition[694]: Ignition 2.19.0
Mar 6 01:48:56.055283 ignition[694]: Stage: fetch-offline
Mar 6 01:48:56.055326 ignition[694]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:48:56.055337 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:48:56.057056 ignition[694]: parsed url from cmdline: ""
Mar 6 01:48:56.057061 ignition[694]: no config URL provided
Mar 6 01:48:56.057067 ignition[694]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 01:48:56.057076 ignition[694]: no config at "/usr/lib/ignition/user.ign"
Mar 6 01:48:56.057102 ignition[694]: op(1): [started] loading QEMU firmware config module
Mar 6 01:48:56.057107 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 6 01:48:56.070661 ignition[694]: op(1): [finished] loading QEMU firmware config module
Mar 6 01:48:56.109983 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 01:48:56.147656 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 01:48:56.173669 systemd-networkd[784]: lo: Link UP
Mar 6 01:48:56.173702 systemd-networkd[784]: lo: Gained carrier
Mar 6 01:48:56.180873 systemd-networkd[784]: Enumeration completed
Mar 6 01:48:56.181561 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 01:48:56.187730 systemd[1]: Reached target network.target - Network.
Mar 6 01:48:56.199666 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:48:56.199693 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 01:48:56.213060 systemd-networkd[784]: eth0: Link UP
Mar 6 01:48:56.213092 systemd-networkd[784]: eth0: Gained carrier
Mar 6 01:48:56.213102 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:48:56.260381 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 01:48:56.362469 ignition[694]: parsing config with SHA512: 2f3f6a55178526af8f2a744635c5974ef10575293cab728aec38d6430bba88c842d48efcabfa8c2f4a41436b21eefcfceceed7af3dbf71da483ce34d44c43c5e
Mar 6 01:48:56.368806 unknown[694]: fetched base config from "system"
Mar 6 01:48:56.368818 unknown[694]: fetched user config from "qemu"
Mar 6 01:48:56.369403 ignition[694]: fetch-offline: fetch-offline passed
Mar 6 01:48:56.369466 ignition[694]: Ignition finished successfully
Mar 6 01:48:56.381509 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 01:48:56.383542 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 6 01:48:56.396860 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 01:48:56.417691 ignition[788]: Ignition 2.19.0
Mar 6 01:48:56.417737 ignition[788]: Stage: kargs
Mar 6 01:48:56.417989 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:48:56.418011 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:48:56.419405 ignition[788]: kargs: kargs passed
Mar 6 01:48:56.419467 ignition[788]: Ignition finished successfully
Mar 6 01:48:56.442579 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 01:48:56.460589 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 01:48:56.480276 ignition[795]: Ignition 2.19.0
Mar 6 01:48:56.480307 ignition[795]: Stage: disks
Mar 6 01:48:56.480474 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:48:56.480486 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:48:56.482921 ignition[795]: disks: disks passed
Mar 6 01:48:56.482964 ignition[795]: Ignition finished successfully
Mar 6 01:48:56.496656 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 01:48:56.498881 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 01:48:56.504048 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 01:48:56.511887 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 01:48:56.519021 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 01:48:56.535698 systemd[1]: Reached target basic.target - Basic System.
Mar 6 01:48:56.546485 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 01:48:56.564752 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 6 01:48:56.572156 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 01:48:56.595719 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 01:48:56.718340 kernel: EXT4-fs (vda9): mounted filesystem 6fb83788-0471-4e89-b45f-3a7586a627a9 r/w with ordered data mode. Quota mode: none.
Mar 6 01:48:56.719396 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 01:48:56.723767 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 01:48:56.753407 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:48:56.766661 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Mar 6 01:48:56.766685 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:48:56.757797 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 01:48:56.784619 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:48:56.784654 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:48:56.784666 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:48:56.766055 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 6 01:48:56.766109 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 01:48:56.766139 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 01:48:56.786705 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:48:56.793001 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 01:48:56.813441 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 01:48:56.871032 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 01:48:56.878049 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Mar 6 01:48:56.888817 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 01:48:56.898902 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 01:48:57.053344 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 01:48:57.070375 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 01:48:57.078937 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 01:48:57.087719 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 01:48:57.093521 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:48:57.260702 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 01:48:57.307750 ignition[926]: INFO : Ignition 2.19.0
ignition[926]: INFO : Stage: mount
Mar 6 01:48:57.312821 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:48:57.321303 ignition[926]: INFO : mount: mount passed
Mar 6 01:48:57.326075 ignition[926]: INFO : Ignition finished successfully
Mar 6 01:48:57.341700 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 01:48:57.364350 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 01:48:57.449878 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:48:57.465314 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Mar 6 01:48:57.474147 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:48:57.474190 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:48:57.474207 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:48:57.486317 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:48:57.488676 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:48:57.721848 ignition[957]: INFO : Ignition 2.19.0 Mar 6 01:48:57.721848 ignition[957]: INFO : Stage: files Mar 6 01:48:57.733020 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 01:48:57.733020 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 01:48:57.756570 systemd-networkd[784]: eth0: Gained IPv6LL Mar 6 01:48:57.763546 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 6 01:48:57.767705 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 6 01:48:57.767705 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 6 01:48:57.776506 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 6 01:48:57.781354 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 6 01:48:57.785801 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 6 01:48:57.785801 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 6 01:48:57.785801 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 6 01:48:57.782324 unknown[957]: wrote ssh authorized keys file for user: core Mar 6 01:48:57.938061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 6 01:48:58.249280 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 01:48:58.255323 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 6 01:48:58.618962 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 6 01:49:00.594299 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 6 01:49:00.594299 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 6 01:49:00.607426 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 01:49:00.617320 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 01:49:00.617320 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 6 01:49:00.617320 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 6 01:49:00.634010 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 01:49:00.640790 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 01:49:00.640790 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 6 01:49:00.650351 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 6 01:49:00.703059 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 01:49:00.855565 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 01:49:00.860815 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 6 01:49:00.860815 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 6 01:49:00.860815 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 6 01:49:00.860815 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 01:49:00.860815 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 01:49:00.860815 ignition[957]: INFO : files: files passed
Mar 6 01:49:00.860815 ignition[957]: INFO : Ignition finished successfully
Mar 6 01:49:00.894888 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 6 01:49:00.908699 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 6 01:49:00.912917 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 6 01:49:00.919123 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 6 01:49:00.919312 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 6 01:49:00.945193 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 6 01:49:00.938976 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 01:49:00.956794 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:49:00.956794 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:49:00.945878 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
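The Ignition entries above pair every `op(n)` as `[started]` / `[finished]`, including nested ops like `op(b): op(c)`. A minimal sketch of a checker for that invariant over captured log lines (a hypothetical helper for log analysis, not part of Ignition itself):

```python
import re

# Matches the innermost "op(<id>): [started|finished]" token in a log line,
# as seen in the Ignition output above. The op id is a hex-like token.
OP_RE = re.compile(r"op\(([0-9a-f]+)\): \[(started|finished)\]")

def check_op_pairing(lines):
    """Return the set of op ids that [started] but never [finished]."""
    open_ops = set()
    for line in lines:
        m = OP_RE.search(line)
        if not m:
            continue  # not an op line (e.g. "files passed")
        op_id, phase = m.groups()
        if phase == "started":
            open_ops.add(op_id)
        else:
            open_ops.discard(op_id)
    return open_ops  # empty set means every op completed
```

On a healthy run like the one logged here, the function returns an empty set; a non-empty result would point at ops that were started but never reported finished.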
Mar 6 01:49:00.969302 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:49:00.966472 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 6 01:49:01.004812 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 6 01:49:01.008779 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 6 01:49:01.017119 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 6 01:49:01.024450 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 6 01:49:01.030964 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 01:49:01.045434 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 01:49:01.063197 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 01:49:01.084092 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 01:49:01.107674 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:49:01.116572 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:49:01.128363 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 01:49:01.140507 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 01:49:01.145539 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 01:49:01.154032 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 01:49:01.160806 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 01:49:01.167351 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 01:49:01.173971 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 01:49:01.183409 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 01:49:01.192476 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 01:49:01.199698 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 01:49:01.208906 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 6 01:49:01.214895 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 6 01:49:01.220861 systemd[1]: Stopped target swap.target - Swaps.
Mar 6 01:49:01.227526 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 6 01:49:01.230314 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 01:49:01.236688 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:49:01.242725 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:49:01.249531 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 6 01:49:01.252252 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 01:49:01.259495 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 6 01:49:01.262251 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 6 01:49:01.268413 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 6 01:49:01.271391 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 01:49:01.278120 systemd[1]: Stopped target paths.target - Path Units.
Mar 6 01:49:01.283288 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 01:49:01.286514 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 01:49:01.294132 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 01:49:01.299371 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 01:49:01.304638 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 01:49:01.307160 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 01:49:01.313311 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 01:49:01.315995 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 01:49:01.321893 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 6 01:49:01.327388 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 01:49:01.335733 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 6 01:49:01.338412 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 6 01:49:01.358527 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 6 01:49:01.367086 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 6 01:49:01.372579 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 6 01:49:01.375666 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:49:01.379477 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 6 01:49:01.379653 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 01:49:01.398733 ignition[1011]: INFO : Ignition 2.19.0
Mar 6 01:49:01.398733 ignition[1011]: INFO : Stage: umount
Mar 6 01:49:01.404383 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:49:01.404383 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:49:01.404383 ignition[1011]: INFO : umount: umount passed
Mar 6 01:49:01.404383 ignition[1011]: INFO : Ignition finished successfully
Mar 6 01:49:01.421383 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 6 01:49:01.429096 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 6 01:49:01.432819 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 6 01:49:01.440767 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 6 01:49:01.443503 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 6 01:49:01.451393 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 6 01:49:01.454199 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 6 01:49:01.461703 systemd[1]: Stopped target network.target - Network.
Mar 6 01:49:01.464666 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 6 01:49:01.467686 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 6 01:49:01.477711 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 6 01:49:01.477785 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 6 01:49:01.487334 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 01:49:01.487426 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 01:49:01.496077 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 01:49:01.496171 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 01:49:01.505809 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 01:49:01.505934 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 01:49:01.516360 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 01:49:01.526121 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 01:49:01.531293 systemd-networkd[784]: eth0: DHCPv6 lease lost
Mar 6 01:49:01.539801 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 01:49:01.543407 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 01:49:01.551576 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 01:49:01.554542 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 01:49:01.562010 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 01:49:01.562107 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:49:01.582433 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 01:49:01.585486 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 01:49:01.585549 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 01:49:01.591813 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 01:49:01.591877 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:49:01.597910 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 01:49:01.597959 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:49:01.604775 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 01:49:01.604824 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:49:01.608946 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:49:01.628817 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 01:49:01.629086 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:49:01.634856 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 01:49:01.634951 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:49:01.640756 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 01:49:01.640797 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:49:01.647741 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 01:49:01.647794 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 01:49:01.650895 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 01:49:01.650946 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 01:49:01.656580 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 01:49:01.656668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:49:01.674558 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 01:49:01.679661 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 01:49:01.679755 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:49:01.686480 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 6 01:49:01.686557 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:49:01.693384 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 01:49:01.693458 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:49:01.700444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 01:49:01.700534 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:49:01.708346 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 01:49:01.708544 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 01:49:01.715394 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 01:49:01.715671 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 01:49:01.733468 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 01:49:01.757791 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 01:49:01.775865 systemd[1]: Switching root.
Mar 6 01:49:01.821965 systemd-journald[194]: Journal stopped
Mar 6 01:49:03.391342 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 6 01:49:03.391440 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 01:49:03.391460 kernel: SELinux: policy capability open_perms=1
Mar 6 01:49:03.391477 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 01:49:03.391492 kernel: SELinux: policy capability always_check_network=0
Mar 6 01:49:03.391508 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 01:49:03.391526 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 01:49:03.391542 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 01:49:03.391558 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 01:49:03.391574 kernel: audit: type=1403 audit(1772761742.010:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 01:49:03.391600 systemd[1]: Successfully loaded SELinux policy in 52.406ms.
Mar 6 01:49:03.391684 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.174ms.
Mar 6 01:49:03.391702 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 6 01:49:03.391720 systemd[1]: Detected virtualization kvm.
Mar 6 01:49:03.391737 systemd[1]: Detected architecture x86-64.
Mar 6 01:49:03.391753 systemd[1]: Detected first boot.
Mar 6 01:49:03.391770 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 01:49:03.391787 zram_generator::config[1055]: No configuration found.
Mar 6 01:49:03.391810 systemd[1]: Populated /etc with preset unit settings.
Mar 6 01:49:03.391827 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 6 01:49:03.391844 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 6 01:49:03.391861 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 6 01:49:03.391879 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 01:49:03.391896 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 01:49:03.391913 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 01:49:03.391931 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 01:49:03.391950 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 01:49:03.391979 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 01:49:03.391997 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 01:49:03.392014 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 01:49:03.392031 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 01:49:03.392054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 01:49:03.392071 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 01:49:03.392088 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 01:49:03.392105 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 01:49:03.392125 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 01:49:03.392149 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 6 01:49:03.392167 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:49:03.392184 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 6 01:49:03.392201 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 6 01:49:03.392403 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 6 01:49:03.392424 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 01:49:03.392441 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:49:03.392469 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 01:49:03.392486 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 01:49:03.392503 systemd[1]: Reached target swap.target - Swaps.
Mar 6 01:49:03.392520 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 01:49:03.392537 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 01:49:03.392554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:49:03.392570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:49:03.392588 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:49:03.392655 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 01:49:03.392679 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 01:49:03.392697 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 01:49:03.392721 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 01:49:03.392737 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:49:03.392754 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 01:49:03.392771 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 01:49:03.392840 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 01:49:03.392861 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 01:49:03.392879 systemd[1]: Reached target machines.target - Containers.
Mar 6 01:49:03.392901 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 01:49:03.392917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 01:49:03.392936 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 01:49:03.392954 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 01:49:03.392975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 01:49:03.392993 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 01:49:03.393011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 01:49:03.393028 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 01:49:03.393048 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 01:49:03.393066 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 01:49:03.393083 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 6 01:49:03.393101 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 6 01:49:03.393117 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 6 01:49:03.393134 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 6 01:49:03.393151 kernel: fuse: init (API version 7.39)
Mar 6 01:49:03.393167 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 01:49:03.393183 kernel: loop: module loaded
Mar 6 01:49:03.393203 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 01:49:03.393279 kernel: ACPI: bus type drm_connector registered
Mar 6 01:49:03.393328 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 01:49:03.393346 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 01:49:03.393392 systemd-journald[1139]: Collecting audit messages is disabled.
Mar 6 01:49:03.393426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 01:49:03.393443 systemd-journald[1139]: Journal started
Mar 6 01:49:03.393477 systemd-journald[1139]: Runtime Journal (/run/log/journal/2099ae4a55164ba9896c233ed7f9d40d) is 6.0M, max 48.3M, 42.2M free.
Mar 6 01:49:02.744411 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 01:49:02.786486 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 6 01:49:02.787464 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 6 01:49:02.788469 systemd[1]: systemd-journald.service: Consumed 1.886s CPU time.
Mar 6 01:49:03.403275 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 6 01:49:03.403323 systemd[1]: Stopped verity-setup.service.
Mar 6 01:49:03.414364 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:49:03.429285 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 01:49:03.436395 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 01:49:03.443180 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 01:49:03.449921 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 01:49:03.453650 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 01:49:03.458054 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 01:49:03.461975 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 01:49:03.466164 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 01:49:03.471702 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:49:03.476340 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 01:49:03.476555 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 01:49:03.482747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 01:49:03.483052 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 01:49:03.487749 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 01:49:03.487973 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 01:49:03.491737 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 01:49:03.491952 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 01:49:03.497559 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 01:49:03.497818 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 01:49:03.503276 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 01:49:03.503489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 01:49:03.507889 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:49:03.512324 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 01:49:03.516808 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 01:49:03.540999 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 01:49:03.558662 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 01:49:03.563520 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 01:49:03.566717 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 01:49:03.566747 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 01:49:03.570878 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 6 01:49:03.576294 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 6 01:49:03.582273 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 01:49:03.585753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 01:49:03.589165 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 01:49:03.594531 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 01:49:03.600280 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 01:49:03.602404 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 01:49:03.607704 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 01:49:03.609336 systemd-journald[1139]: Time spent on flushing to /var/log/journal/2099ae4a55164ba9896c233ed7f9d40d is 61.315ms for 982 entries.
Mar 6 01:49:03.609336 systemd-journald[1139]: System Journal (/var/log/journal/2099ae4a55164ba9896c233ed7f9d40d) is 8.0M, max 195.6M, 187.6M free.
Mar 6 01:49:03.818946 systemd-journald[1139]: Received client request to flush runtime journal.
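For a sense of scale, the journald flush statistics above (61.315 ms for 982 entries) work out to roughly 62 µs per entry; a quick arithmetic check:

```python
# Average per-entry flush cost, from the journald statistics reported above.
total_ms = 61.315   # time spent flushing to /var/log/journal
entries = 982       # entries flushed in that time
per_entry_us = total_ms * 1000 / entries  # about 62 microseconds per entry
```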
Mar 6 01:49:03.818992 kernel: loop0: detected capacity change from 0 to 142488
Mar 6 01:49:03.611377 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 01:49:03.625446 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 01:49:03.654512 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 01:49:03.672113 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:49:03.677072 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 01:49:03.681521 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 01:49:03.685791 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 6 01:49:03.690452 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 01:49:03.788164 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 01:49:03.805740 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 6 01:49:03.813440 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 6 01:49:03.824789 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 01:49:03.834982 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:49:03.856310 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 6 01:49:03.861892 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 01:49:03.863742 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 6 01:49:03.877269 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 01:49:03.887479 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Mar 6 01:49:03.887498 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Mar 6 01:49:03.895709 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:49:03.906358 kernel: loop1: detected capacity change from 0 to 219192
Mar 6 01:49:03.914490 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 01:49:04.128288 kernel: loop2: detected capacity change from 0 to 140768
Mar 6 01:49:04.157721 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 01:49:04.169534 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 01:49:04.187661 kernel: loop3: detected capacity change from 0 to 142488
Mar 6 01:49:04.223439 kernel: loop4: detected capacity change from 0 to 219192
Mar 6 01:49:04.243206 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Mar 6 01:49:04.243287 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Mar 6 01:49:04.247021 kernel: loop5: detected capacity change from 0 to 140768
Mar 6 01:49:04.254437 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:49:04.472399 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 6 01:49:04.473330 (sd-merge)[1196]: Merged extensions into '/usr'.
Mar 6 01:49:04.481035 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 01:49:04.481077 systemd[1]: Reloading...
Mar 6 01:49:04.581269 zram_generator::config[1225]: No configuration found.
Mar 6 01:49:05.029163 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
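The `(sd-merge)` lines report each activated extension by the name of its image file with the `.raw` suffix stripped (systemd-sysext discovers images in directories such as /etc/extensions and vendor paths). An illustrative sketch of that naming, using a hypothetical helper rather than systemd's own code:

```python
import os

# Hypothetical illustration: derive the extension names that (sd-merge)
# would report from a list of discovered .raw image filenames.
def extension_names(filenames):
    return sorted(os.path.splitext(f)[0] for f in filenames if f.endswith(".raw"))
```

Applied to images named `containerd-flatcar.raw`, `docker-flatcar.raw`, and `kubernetes.raw`, this yields exactly the three extension names the log shows being merged into '/usr'.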
Mar 6 01:49:05.045432 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 6 01:49:05.103567 systemd[1]: Reloading finished in 621 ms.
Mar 6 01:49:05.147591 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 6 01:49:05.153017 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 01:49:05.179847 systemd[1]: Starting ensure-sysext.service...
Mar 6 01:49:05.187700 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 01:49:05.195817 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Mar 6 01:49:05.195870 systemd[1]: Reloading...
Mar 6 01:49:05.221923 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 01:49:05.223786 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 6 01:49:05.225421 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 6 01:49:05.225923 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 6 01:49:05.226098 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 6 01:49:05.232099 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 01:49:05.232134 systemd-tmpfiles[1262]: Skipping /boot
Mar 6 01:49:05.253157 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 01:49:05.253179 systemd-tmpfiles[1262]: Skipping /boot
Mar 6 01:49:05.281309 zram_generator::config[1288]: No configuration found.
Mar 6 01:49:05.411203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 6 01:49:05.467982 systemd[1]: Reloading finished in 271 ms.
Mar 6 01:49:05.494574 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 6 01:49:05.510692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:49:05.538814 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 6 01:49:05.544956 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 6 01:49:05.551186 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 6 01:49:05.557504 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 01:49:05.563561 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:49:05.572358 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 6 01:49:05.580587 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:49:05.580875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 01:49:05.588518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 01:49:05.598322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 01:49:05.602864 augenrules[1350]: No rules
Mar 6 01:49:05.604763 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 01:49:05.608753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 01:49:05.613803 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 6 01:49:05.617704 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:49:05.619939 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 6 01:49:05.625291 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 6 01:49:05.630027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 01:49:05.630275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 01:49:05.634724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 01:49:05.634894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 01:49:05.635825 systemd-udevd[1340]: Using default interface naming scheme 'v255'.
Mar 6 01:49:05.640189 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 01:49:05.640436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 01:49:05.652777 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:49:05.653020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 01:49:05.658859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 01:49:05.668488 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 01:49:05.677681 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 01:49:05.682128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 01:49:05.687932 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 6 01:49:05.692422 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:49:05.694112 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:49:05.699750 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 6 01:49:05.704560 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 6 01:49:05.710008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 01:49:05.713814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 01:49:05.720543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 01:49:05.720875 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 01:49:05.726770 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 01:49:05.727025 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 01:49:05.737890 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 6 01:49:05.743730 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 6 01:49:05.772997 systemd[1]: Finished ensure-sysext.service.
Mar 6 01:49:05.779788 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 6 01:49:05.780071 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:49:05.780361 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 01:49:05.791588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 01:49:05.797534 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 01:49:05.801888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 01:49:05.813267 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1369)
Mar 6 01:49:05.817152 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 01:49:05.820928 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 01:49:05.831492 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 01:49:05.845509 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 6 01:49:05.849870 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 6 01:49:05.849903 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:49:05.850584 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 01:49:05.851018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 01:49:05.855483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 01:49:05.855717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 01:49:05.863415 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 01:49:05.863784 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 01:49:05.868071 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 01:49:05.868532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 01:49:05.870199 systemd-resolved[1337]: Positive Trust Anchors:
Mar 6 01:49:05.870319 systemd-resolved[1337]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 01:49:05.870350 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 01:49:05.882668 systemd-resolved[1337]: Defaulting to hostname 'linux'.
Mar 6 01:49:05.885395 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 01:49:05.912426 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 6 01:49:05.917093 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:49:05.928409 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 6 01:49:05.939582 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 6 01:49:05.944638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 01:49:05.944770 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 01:49:05.977326 kernel: ACPI: button: Power Button [PWRF]
Mar 6 01:49:05.984341 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 6 01:49:05.983921 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 6 01:49:05.991406 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 6 01:49:06.001409 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 6 01:49:06.001669 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 6 01:49:06.000712 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 6 01:49:06.005565 systemd[1]: Reached target time-set.target - System Time Set.
Mar 6 01:49:06.016775 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 6 01:49:06.023168 systemd-networkd[1405]: lo: Link UP
Mar 6 01:49:06.023203 systemd-networkd[1405]: lo: Gained carrier
Mar 6 01:49:06.025029 systemd-networkd[1405]: Enumeration completed
Mar 6 01:49:06.025137 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 01:49:06.026733 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:49:06.026760 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 01:49:06.028519 systemd[1]: Reached target network.target - Network.
Mar 6 01:49:06.030350 systemd-networkd[1405]: eth0: Link UP
Mar 6 01:49:06.030355 systemd-networkd[1405]: eth0: Gained carrier
Mar 6 01:49:06.030368 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:49:06.045681 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 6 01:49:06.072805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:49:06.093377 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 01:49:06.097726 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Mar 6 01:49:07.351587 systemd-resolved[1337]: Clock change detected. Flushing caches.
Mar 6 01:49:07.352868 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 6 01:49:07.353184 systemd-timesyncd[1407]: Initial clock synchronization to Fri 2026-03-06 01:49:07.351398 UTC.
Mar 6 01:49:07.394355 kernel: mousedev: PS/2 mouse device common for all mice
Mar 6 01:49:07.420052 kernel: kvm_amd: TSC scaling supported
Mar 6 01:49:07.420131 kernel: kvm_amd: Nested Virtualization enabled
Mar 6 01:49:07.420150 kernel: kvm_amd: Nested Paging enabled
Mar 6 01:49:07.423266 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 6 01:49:07.423291 kernel: kvm_amd: PMU virtualization is disabled
Mar 6 01:49:07.485333 kernel: EDAC MC: Ver: 3.0.0
Mar 6 01:49:07.494687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:49:07.532529 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 6 01:49:07.546518 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 6 01:49:07.556564 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 6 01:49:07.592838 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 6 01:49:07.596987 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:49:07.600438 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 01:49:07.603828 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 6 01:49:07.607558 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 6 01:49:07.611503 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 6 01:49:07.614585 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 6 01:49:07.618159 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 6 01:49:07.621710 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 6 01:49:07.621766 systemd[1]: Reached target paths.target - Path Units.
Mar 6 01:49:07.624397 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 01:49:07.628112 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 6 01:49:07.633030 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 6 01:49:07.649921 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 6 01:49:07.655587 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 6 01:49:07.659531 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 6 01:49:07.663017 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 01:49:07.666047 systemd[1]: Reached target basic.target - Basic System.
Mar 6 01:49:07.667917 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 6 01:49:07.669087 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 6 01:49:07.669140 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 6 01:49:07.670666 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 6 01:49:07.675191 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 6 01:49:07.681360 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 6 01:49:07.685845 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 6 01:49:07.689307 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 6 01:49:07.691135 jq[1440]: false
Mar 6 01:49:07.691443 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 6 01:49:07.694345 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 6 01:49:07.698434 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 6 01:49:07.703128 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 6 01:49:07.717365 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 6 01:49:07.721402 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 6 01:49:07.721963 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 6 01:49:07.725396 systemd[1]: Starting update-engine.service - Update Engine...
Mar 6 01:49:07.728759 extend-filesystems[1441]: Found loop3
Mar 6 01:49:07.728759 extend-filesystems[1441]: Found loop4
Mar 6 01:49:07.735742 extend-filesystems[1441]: Found loop5
Mar 6 01:49:07.735742 extend-filesystems[1441]: Found sr0
Mar 6 01:49:07.735742 extend-filesystems[1441]: Found vda
Mar 6 01:49:07.735742 extend-filesystems[1441]: Found vda1
Mar 6 01:49:07.735742 extend-filesystems[1441]: Found vda2
Mar 6 01:49:07.735742 extend-filesystems[1441]: Found vda3
Mar 6 01:49:07.735742 extend-filesystems[1441]: Found usr
Mar 6 01:49:07.735742 extend-filesystems[1441]: Found vda4
Mar 6 01:49:07.735742 extend-filesystems[1441]: Found vda6
Mar 6 01:49:07.735742 extend-filesystems[1441]: Found vda7
Mar 6 01:49:07.735742 extend-filesystems[1441]: Found vda9
Mar 6 01:49:07.735742 extend-filesystems[1441]: Checking size of /dev/vda9
Mar 6 01:49:07.823408 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 6 01:49:07.823478 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1378)
Mar 6 01:49:07.755974 dbus-daemon[1439]: [system] SELinux support is enabled
Mar 6 01:49:07.756368 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 6 01:49:07.824134 extend-filesystems[1441]: Resized partition /dev/vda9
Mar 6 01:49:07.831902 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 6 01:49:07.787866 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 6 01:49:07.832039 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024)
Mar 6 01:49:07.843684 update_engine[1453]: I20260306 01:49:07.838968 1453 main.cc:92] Flatcar Update Engine starting
Mar 6 01:49:07.843684 update_engine[1453]: I20260306 01:49:07.841977 1453 update_check_scheduler.cc:74] Next update check in 7m15s
Mar 6 01:49:07.810191 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 6 01:49:07.859386 jq[1461]: true
Mar 6 01:49:07.839663 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 6 01:49:07.861277 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 6 01:49:07.861277 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 6 01:49:07.861277 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 6 01:49:07.839960 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 6 01:49:07.885713 extend-filesystems[1441]: Resized filesystem in /dev/vda9
Mar 6 01:49:07.840513 systemd[1]: motdgen.service: Deactivated successfully.
Mar 6 01:49:07.840786 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 6 01:49:07.860186 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 6 01:49:07.860297 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 6 01:49:07.861180 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 6 01:49:07.861678 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 6 01:49:07.861973 systemd-logind[1447]: New seat seat0.
Mar 6 01:49:07.872143 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 6 01:49:07.877022 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 6 01:49:07.885642 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 6 01:49:07.909430 jq[1468]: true
Mar 6 01:49:07.911709 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 6 01:49:07.923444 tar[1465]: linux-amd64/LICENSE
Mar 6 01:49:07.923444 tar[1465]: linux-amd64/helm
Mar 6 01:49:07.923289 dbus-daemon[1439]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 6 01:49:07.934054 systemd[1]: Started update-engine.service - Update Engine.
Mar 6 01:49:07.940115 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 6 01:49:07.940469 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 6 01:49:07.945438 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 6 01:49:07.945544 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 6 01:49:07.959454 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 6 01:49:07.976020 bash[1495]: Updated "/home/core/.ssh/authorized_keys"
Mar 6 01:49:07.977077 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 6 01:49:07.985990 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 6 01:49:08.022411 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 6 01:49:08.045699 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 6 01:49:08.077723 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 6 01:49:08.089712 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 6 01:49:08.100968 systemd[1]: issuegen.service: Deactivated successfully.
Mar 6 01:49:08.101285 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 6 01:49:08.114814 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 6 01:49:08.128105 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 6 01:49:08.141769 containerd[1469]: time="2026-03-06T01:49:08.141704019Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 6 01:49:08.155059 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 6 01:49:08.161294 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 6 01:49:08.165808 systemd[1]: Reached target getty.target - Login Prompts.
Mar 6 01:49:08.170383 containerd[1469]: time="2026-03-06T01:49:08.170286035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 6 01:49:08.173116 containerd[1469]: time="2026-03-06T01:49:08.173080577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:49:08.173185 containerd[1469]: time="2026-03-06T01:49:08.173170926Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 6 01:49:08.173312 containerd[1469]: time="2026-03-06T01:49:08.173298083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 6 01:49:08.173513 containerd[1469]: time="2026-03-06T01:49:08.173497175Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..."
type=io.containerd.warning.v1
Mar 6 01:49:08.173578 containerd[1469]: time="2026-03-06T01:49:08.173566023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 6 01:49:08.173735 containerd[1469]: time="2026-03-06T01:49:08.173718057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:49:08.173790 containerd[1469]: time="2026-03-06T01:49:08.173779311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 6 01:49:08.174050 containerd[1469]: time="2026-03-06T01:49:08.174030491Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:49:08.174101 containerd[1469]: time="2026-03-06T01:49:08.174089761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 6 01:49:08.174155 containerd[1469]: time="2026-03-06T01:49:08.174142760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:49:08.174195 containerd[1469]: time="2026-03-06T01:49:08.174185089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 6 01:49:08.174462 containerd[1469]: time="2026-03-06T01:49:08.174442139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 6 01:49:08.174840 containerd[1469]: time="2026-03-06T01:49:08.174820596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..."
type=io.containerd.snapshotter.v1
Mar 6 01:49:08.175036 containerd[1469]: time="2026-03-06T01:49:08.175017192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:49:08.175085 containerd[1469]: time="2026-03-06T01:49:08.175073537Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 6 01:49:08.175330 containerd[1469]: time="2026-03-06T01:49:08.175305651Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 6 01:49:08.175461 containerd[1469]: time="2026-03-06T01:49:08.175445191Z" level=info msg="metadata content store policy set" policy=shared
Mar 6 01:49:08.182600 containerd[1469]: time="2026-03-06T01:49:08.182580403Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 6 01:49:08.182717 containerd[1469]: time="2026-03-06T01:49:08.182703433Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 6 01:49:08.182820 containerd[1469]: time="2026-03-06T01:49:08.182806645Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 6 01:49:08.182871 containerd[1469]: time="2026-03-06T01:49:08.182859904Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 6 01:49:08.182917 containerd[1469]: time="2026-03-06T01:49:08.182906311Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 6 01:49:08.183093 containerd[1469]: time="2026-03-06T01:49:08.183076579Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..."
type=io.containerd.monitor.v1
Mar 6 01:49:08.183375 containerd[1469]: time="2026-03-06T01:49:08.183358475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 6 01:49:08.183537 containerd[1469]: time="2026-03-06T01:49:08.183521570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 6 01:49:08.183647 containerd[1469]: time="2026-03-06T01:49:08.183590048Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 6 01:49:08.183704 containerd[1469]: time="2026-03-06T01:49:08.183690996Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 6 01:49:08.183748 containerd[1469]: time="2026-03-06T01:49:08.183737283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 6 01:49:08.183801 containerd[1469]: time="2026-03-06T01:49:08.183788848Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 6 01:49:08.183844 containerd[1469]: time="2026-03-06T01:49:08.183833462Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.183875961Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.183891951Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.183903653Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..."
type=io.containerd.service.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.183919984Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.183931305Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.183952664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.183964547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.183978242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.183989142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.183999792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.184011344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.184021704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.184035159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184182 containerd[1469]: time="2026-03-06T01:49:08.184045768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..."
type=io.containerd.grpc.v1
Mar 6 01:49:08.184462 containerd[1469]: time="2026-03-06T01:49:08.184057781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184462 containerd[1469]: time="2026-03-06T01:49:08.184068221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184462 containerd[1469]: time="2026-03-06T01:49:08.184077948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184462 containerd[1469]: time="2026-03-06T01:49:08.184087837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184462 containerd[1469]: time="2026-03-06T01:49:08.184100731Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 6 01:49:08.184462 containerd[1469]: time="2026-03-06T01:49:08.184117643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184462 containerd[1469]: time="2026-03-06T01:49:08.184128352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 6 01:49:08.184462 containerd[1469]: time="2026-03-06T01:49:08.184137610Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 6 01:49:08.184740 containerd[1469]: time="2026-03-06T01:49:08.184722822Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 6 01:49:08.184797 containerd[1469]: time="2026-03-06T01:49:08.184784247Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 6 01:49:08.184837 containerd[1469]: time="2026-03-06T01:49:08.184826596Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 6 01:49:08.184878 containerd[1469]: time="2026-03-06T01:49:08.184867502Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 6 01:49:08.185774 containerd[1469]: time="2026-03-06T01:49:08.184957010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 6 01:49:08.185774 containerd[1469]: time="2026-03-06T01:49:08.184973921Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 6 01:49:08.185774 containerd[1469]: time="2026-03-06T01:49:08.184983970Z" level=info msg="NRI interface is disabled by configuration." Mar 6 01:49:08.185774 containerd[1469]: time="2026-03-06T01:49:08.184999489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 6 01:49:08.185853 containerd[1469]: time="2026-03-06T01:49:08.185198841Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 6 01:49:08.185853 containerd[1469]: time="2026-03-06T01:49:08.185302064Z" level=info msg="Connect containerd service" Mar 6 01:49:08.185853 containerd[1469]: time="2026-03-06T01:49:08.185330938Z" level=info msg="using legacy CRI server" Mar 6 01:49:08.185853 containerd[1469]: time="2026-03-06T01:49:08.185337079Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 6 01:49:08.185853 containerd[1469]: time="2026-03-06T01:49:08.185435874Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 6 01:49:08.186573 containerd[1469]: time="2026-03-06T01:49:08.186490272Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 01:49:08.186801 containerd[1469]: time="2026-03-06T01:49:08.186733190Z" level=info msg="Start subscribing containerd event" Mar 6 01:49:08.186801 containerd[1469]: time="2026-03-06T01:49:08.186798302Z" level=info msg="Start recovering state" Mar 6 01:49:08.186913 containerd[1469]: time="2026-03-06T01:49:08.186876799Z" level=info msg="Start event monitor" Mar 6 01:49:08.187004 containerd[1469]: time="2026-03-06T01:49:08.186932513Z" level=info msg="Start snapshots 
syncer" Mar 6 01:49:08.187004 containerd[1469]: time="2026-03-06T01:49:08.186962469Z" level=info msg="Start cni network conf syncer for default" Mar 6 01:49:08.187004 containerd[1469]: time="2026-03-06T01:49:08.186970383Z" level=info msg="Start streaming server" Mar 6 01:49:08.187004 containerd[1469]: time="2026-03-06T01:49:08.186975678Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 6 01:49:08.187075 containerd[1469]: time="2026-03-06T01:49:08.187029960Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 6 01:49:08.187166 systemd[1]: Started containerd.service - containerd container runtime. Mar 6 01:49:08.189078 containerd[1469]: time="2026-03-06T01:49:08.189051765Z" level=info msg="containerd successfully booted in 0.048385s" Mar 6 01:49:08.282473 systemd-networkd[1405]: eth0: Gained IPv6LL Mar 6 01:49:08.285408 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 6 01:49:08.290861 systemd[1]: Reached target network-online.target - Network is Online. Mar 6 01:49:08.304553 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 6 01:49:08.309006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:49:08.316489 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 6 01:49:08.330695 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 6 01:49:08.336546 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:46852.service - OpenSSH per-connection server daemon (10.0.0.1:46852). Mar 6 01:49:08.355052 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 6 01:49:08.355683 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 6 01:49:08.360497 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 6 01:49:08.365878 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 6 01:49:08.392837 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 46852 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:49:08.396446 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:08.406761 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 6 01:49:08.421769 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 01:49:08.430842 systemd-logind[1447]: New session 1 of user core. Mar 6 01:49:08.440438 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 01:49:08.458018 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 6 01:49:08.460276 tar[1465]: linux-amd64/README.md Mar 6 01:49:08.475470 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 01:49:08.475530 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 01:49:08.589065 systemd[1548]: Queued start job for default target default.target. Mar 6 01:49:08.603030 systemd[1548]: Created slice app.slice - User Application Slice. Mar 6 01:49:08.603102 systemd[1548]: Reached target paths.target - Paths. Mar 6 01:49:08.603123 systemd[1548]: Reached target timers.target - Timers. Mar 6 01:49:08.605299 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 01:49:08.621011 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 01:49:08.621198 systemd[1548]: Reached target sockets.target - Sockets. Mar 6 01:49:08.621297 systemd[1548]: Reached target basic.target - Basic System. Mar 6 01:49:08.621343 systemd[1548]: Reached target default.target - Main User Target. Mar 6 01:49:08.621385 systemd[1548]: Startup finished in 133ms. Mar 6 01:49:08.621716 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 01:49:08.626891 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 6 01:49:08.692890 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:46862.service - OpenSSH per-connection server daemon (10.0.0.1:46862). Mar 6 01:49:08.755120 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 46862 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:49:08.757138 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:08.763254 systemd-logind[1447]: New session 2 of user core. Mar 6 01:49:08.770441 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 01:49:08.835372 sshd[1562]: pam_unix(sshd:session): session closed for user core Mar 6 01:49:08.851505 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:46862.service: Deactivated successfully. Mar 6 01:49:08.854651 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 01:49:08.856970 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Mar 6 01:49:08.866652 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:46876.service - OpenSSH per-connection server daemon (10.0.0.1:46876). Mar 6 01:49:08.872917 systemd-logind[1447]: Removed session 2. Mar 6 01:49:08.892396 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 46876 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:49:08.893844 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:08.898967 systemd-logind[1447]: New session 3 of user core. Mar 6 01:49:08.909367 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 01:49:08.970983 sshd[1569]: pam_unix(sshd:session): session closed for user core Mar 6 01:49:08.974730 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:46876.service: Deactivated successfully. Mar 6 01:49:08.977201 systemd[1]: session-3.scope: Deactivated successfully. Mar 6 01:49:08.979390 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Mar 6 01:49:08.981407 systemd-logind[1447]: Removed session 3. 
Mar 6 01:49:09.102016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:49:09.107511 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 01:49:09.107845 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:49:09.111959 systemd[1]: Startup finished in 1.470s (kernel) + 9.228s (initrd) + 5.907s (userspace) = 16.606s. Mar 6 01:49:09.533293 kubelet[1581]: E0306 01:49:09.532145 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:49:09.535698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:49:09.535917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:49:18.982838 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:38838.service - OpenSSH per-connection server daemon (10.0.0.1:38838). Mar 6 01:49:19.015848 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 38838 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:49:19.017723 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:19.022544 systemd-logind[1447]: New session 4 of user core. Mar 6 01:49:19.032385 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 01:49:19.089311 sshd[1594]: pam_unix(sshd:session): session closed for user core Mar 6 01:49:19.095474 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:38838.service: Deactivated successfully. Mar 6 01:49:19.097073 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 01:49:19.098795 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. 
Mar 6 01:49:19.115517 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:38852.service - OpenSSH per-connection server daemon (10.0.0.1:38852). Mar 6 01:49:19.116707 systemd-logind[1447]: Removed session 4. Mar 6 01:49:19.142102 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 38852 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:49:19.143784 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:19.148991 systemd-logind[1447]: New session 5 of user core. Mar 6 01:49:19.158380 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 01:49:19.208759 sshd[1601]: pam_unix(sshd:session): session closed for user core Mar 6 01:49:19.227057 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:38852.service: Deactivated successfully. Mar 6 01:49:19.229053 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 01:49:19.230891 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Mar 6 01:49:19.232487 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:38858.service - OpenSSH per-connection server daemon (10.0.0.1:38858). Mar 6 01:49:19.233852 systemd-logind[1447]: Removed session 5. Mar 6 01:49:19.264738 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 38858 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:49:19.266565 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:19.272043 systemd-logind[1447]: New session 6 of user core. Mar 6 01:49:19.284444 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 6 01:49:19.341714 sshd[1608]: pam_unix(sshd:session): session closed for user core Mar 6 01:49:19.351069 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:38858.service: Deactivated successfully. Mar 6 01:49:19.353029 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 01:49:19.354767 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. 
Mar 6 01:49:19.372557 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:38864.service - OpenSSH per-connection server daemon (10.0.0.1:38864). Mar 6 01:49:19.373974 systemd-logind[1447]: Removed session 6. Mar 6 01:49:19.398714 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 38864 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:49:19.400092 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:19.405043 systemd-logind[1447]: New session 7 of user core. Mar 6 01:49:19.429416 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 6 01:49:19.493954 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 01:49:19.494514 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:49:19.512082 sudo[1618]: pam_unix(sudo:session): session closed for user root Mar 6 01:49:19.514310 sshd[1615]: pam_unix(sshd:session): session closed for user core Mar 6 01:49:19.528361 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:38864.service: Deactivated successfully. Mar 6 01:49:19.530695 systemd[1]: session-7.scope: Deactivated successfully. Mar 6 01:49:19.532937 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Mar 6 01:49:19.546773 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:38868.service - OpenSSH per-connection server daemon (10.0.0.1:38868). Mar 6 01:49:19.547930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 6 01:49:19.549898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:49:19.550395 systemd-logind[1447]: Removed session 7. 
Mar 6 01:49:19.575599 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 38868 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:49:19.576460 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:19.582702 systemd-logind[1447]: New session 8 of user core. Mar 6 01:49:19.590465 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 6 01:49:19.648499 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 01:49:19.649062 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:49:19.655181 sudo[1630]: pam_unix(sudo:session): session closed for user root Mar 6 01:49:19.664421 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 6 01:49:19.664976 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:49:19.695134 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 6 01:49:19.697171 auditctl[1633]: No rules Mar 6 01:49:19.698314 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 01:49:19.698592 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 6 01:49:19.702003 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:49:19.716408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:49:19.722344 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:49:19.738998 augenrules[1663]: No rules Mar 6 01:49:19.740806 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Mar 6 01:49:19.742508 sudo[1629]: pam_unix(sudo:session): session closed for user root Mar 6 01:49:19.745149 sshd[1623]: pam_unix(sshd:session): session closed for user core Mar 6 01:49:19.753045 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:38868.service: Deactivated successfully. Mar 6 01:49:19.755167 systemd[1]: session-8.scope: Deactivated successfully. Mar 6 01:49:19.757000 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Mar 6 01:49:19.769548 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:38872.service - OpenSSH per-connection server daemon (10.0.0.1:38872). Mar 6 01:49:19.771100 systemd-logind[1447]: Removed session 8. Mar 6 01:49:19.777821 kubelet[1648]: E0306 01:49:19.777706 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:49:19.784123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:49:19.784507 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:49:19.799359 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 38872 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:49:19.801197 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:19.806499 systemd-logind[1447]: New session 9 of user core. Mar 6 01:49:19.821467 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 6 01:49:19.878171 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 01:49:19.878602 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:49:20.208517 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 6 01:49:20.208731 (dockerd)[1694]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 01:49:21.307095 dockerd[1694]: time="2026-03-06T01:49:21.306745086Z" level=info msg="Starting up" Mar 6 01:49:21.499700 dockerd[1694]: time="2026-03-06T01:49:21.499583949Z" level=info msg="Loading containers: start." Mar 6 01:49:21.654382 kernel: Initializing XFRM netlink socket Mar 6 01:49:21.783834 systemd-networkd[1405]: docker0: Link UP Mar 6 01:49:21.823472 dockerd[1694]: time="2026-03-06T01:49:21.823310057Z" level=info msg="Loading containers: done." Mar 6 01:49:21.847065 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck154415795-merged.mount: Deactivated successfully. Mar 6 01:49:21.854821 dockerd[1694]: time="2026-03-06T01:49:21.854394619Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 01:49:21.854971 dockerd[1694]: time="2026-03-06T01:49:21.854833539Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 6 01:49:21.855108 dockerd[1694]: time="2026-03-06T01:49:21.855035836Z" level=info msg="Daemon has completed initialization" Mar 6 01:49:21.919849 dockerd[1694]: time="2026-03-06T01:49:21.919657471Z" level=info msg="API listen on /run/docker.sock" Mar 6 01:49:21.920023 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 6 01:49:22.455307 containerd[1469]: time="2026-03-06T01:49:22.455161695Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 6 01:49:23.030589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131645109.mount: Deactivated successfully. 
Mar 6 01:49:26.228385 containerd[1469]: time="2026-03-06T01:49:26.228057319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:26.229805 containerd[1469]: time="2026-03-06T01:49:26.229719660Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 6 01:49:26.230952 containerd[1469]: time="2026-03-06T01:49:26.230857347Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:26.236191 containerd[1469]: time="2026-03-06T01:49:26.236147606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:26.237684 containerd[1469]: time="2026-03-06T01:49:26.237604276Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 3.782399461s" Mar 6 01:49:26.237735 containerd[1469]: time="2026-03-06T01:49:26.237698642Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 6 01:49:26.239019 containerd[1469]: time="2026-03-06T01:49:26.238975936Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 6 01:49:29.313520 containerd[1469]: time="2026-03-06T01:49:29.313329707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:29.315075 containerd[1469]: time="2026-03-06T01:49:29.314975377Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 6 01:49:29.316612 containerd[1469]: time="2026-03-06T01:49:29.316514299Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:29.320384 containerd[1469]: time="2026-03-06T01:49:29.320292343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:29.322098 containerd[1469]: time="2026-03-06T01:49:29.322012848Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 3.082989222s" Mar 6 01:49:29.322098 containerd[1469]: time="2026-03-06T01:49:29.322085574Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 6 01:49:29.323351 containerd[1469]: time="2026-03-06T01:49:29.323300142Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 6 01:49:30.038792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 6 01:49:30.059810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:49:30.413454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 6 01:49:30.430003 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:49:31.044008 kubelet[1912]: E0306 01:49:31.043686 1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:49:31.048979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:49:31.049284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:49:31.050020 systemd[1]: kubelet.service: Consumed 1.233s CPU time. Mar 6 01:49:31.912350 containerd[1469]: time="2026-03-06T01:49:31.912137988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:31.913367 containerd[1469]: time="2026-03-06T01:49:31.913199284Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 6 01:49:31.915330 containerd[1469]: time="2026-03-06T01:49:31.915265828Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:31.919187 containerd[1469]: time="2026-03-06T01:49:31.919074976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:31.920724 containerd[1469]: time="2026-03-06T01:49:31.920608714Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id 
\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 2.597246667s" Mar 6 01:49:31.920724 containerd[1469]: time="2026-03-06T01:49:31.920700746Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 6 01:49:31.921808 containerd[1469]: time="2026-03-06T01:49:31.921733520Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 6 01:49:34.167177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759022566.mount: Deactivated successfully. Mar 6 01:49:35.026782 containerd[1469]: time="2026-03-06T01:49:35.026606396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:35.028079 containerd[1469]: time="2026-03-06T01:49:35.028014436Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 6 01:49:35.029037 containerd[1469]: time="2026-03-06T01:49:35.028967906Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:35.034623 containerd[1469]: time="2026-03-06T01:49:35.034521061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:35.035745 containerd[1469]: time="2026-03-06T01:49:35.035561082Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag 
\"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 3.113761079s" Mar 6 01:49:35.035745 containerd[1469]: time="2026-03-06T01:49:35.035703829Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 6 01:49:35.037063 containerd[1469]: time="2026-03-06T01:49:35.036941549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 6 01:49:35.536822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384308297.mount: Deactivated successfully. Mar 6 01:49:37.592316 containerd[1469]: time="2026-03-06T01:49:37.592084986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:37.593534 containerd[1469]: time="2026-03-06T01:49:37.592950578Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 6 01:49:37.594174 containerd[1469]: time="2026-03-06T01:49:37.594096328Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:37.599883 containerd[1469]: time="2026-03-06T01:49:37.599759634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:37.602893 containerd[1469]: time="2026-03-06T01:49:37.602718004Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.565637966s" Mar 6 01:49:37.602893 containerd[1469]: time="2026-03-06T01:49:37.602874977Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 6 01:49:37.604530 containerd[1469]: time="2026-03-06T01:49:37.604127789Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 6 01:49:38.353836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598946641.mount: Deactivated successfully. Mar 6 01:49:38.360738 containerd[1469]: time="2026-03-06T01:49:38.360592857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:38.361779 containerd[1469]: time="2026-03-06T01:49:38.361710521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 6 01:49:38.363493 containerd[1469]: time="2026-03-06T01:49:38.363398678Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:38.367792 containerd[1469]: time="2026-03-06T01:49:38.367740222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:38.368691 containerd[1469]: time="2026-03-06T01:49:38.368579428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 764.420641ms" Mar 6 
01:49:38.368691 containerd[1469]: time="2026-03-06T01:49:38.368645692Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 6 01:49:38.372781 containerd[1469]: time="2026-03-06T01:49:38.372722533Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 6 01:49:38.880934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1554826401.mount: Deactivated successfully. Mar 6 01:49:41.219967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 6 01:49:41.229731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:49:41.647896 containerd[1469]: time="2026-03-06T01:49:41.647717931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:41.666071 containerd[1469]: time="2026-03-06T01:49:41.652505890Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 6 01:49:41.666071 containerd[1469]: time="2026-03-06T01:49:41.654349566Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:41.666379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 6 01:49:41.670486 containerd[1469]: time="2026-03-06T01:49:41.669946413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:49:41.671929 containerd[1469]: time="2026-03-06T01:49:41.671745112Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 3.298987023s" Mar 6 01:49:41.671929 containerd[1469]: time="2026-03-06T01:49:41.671832292Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 6 01:49:41.698586 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:49:41.832697 kubelet[2062]: E0306 01:49:41.832500 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:49:41.839604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:49:41.839877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:49:48.536998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:49:48.545721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:49:48.594533 systemd[1]: Reloading requested from client PID 2098 ('systemctl') (unit session-9.scope)... 
Mar 6 01:49:48.594643 systemd[1]: Reloading... Mar 6 01:49:48.702318 zram_generator::config[2137]: No configuration found. Mar 6 01:49:48.947981 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:49:49.080645 systemd[1]: Reloading finished in 485 ms. Mar 6 01:49:49.165488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:49:49.180191 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:49:49.194494 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 01:49:49.195030 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:49:49.219198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:49:49.567613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:49:49.596000 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:49:49.753677 kubelet[2187]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 01:49:49.753677 kubelet[2187]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 6 01:49:49.754147 kubelet[2187]: I0306 01:49:49.753693 2187 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 01:49:50.110179 kubelet[2187]: I0306 01:49:50.110116 2187 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 6 01:49:50.110179 kubelet[2187]: I0306 01:49:50.110180 2187 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:49:50.110772 kubelet[2187]: I0306 01:49:50.110343 2187 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 6 01:49:50.110772 kubelet[2187]: I0306 01:49:50.110363 2187 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 01:49:50.110882 kubelet[2187]: I0306 01:49:50.110775 2187 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 01:49:51.464023 kubelet[2187]: E0306 01:49:51.463635 2187 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 01:49:51.505839 kubelet[2187]: I0306 01:49:51.505735 2187 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:49:51.532766 kubelet[2187]: E0306 01:49:51.532683 2187 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:49:51.532907 kubelet[2187]: I0306 01:49:51.532851 2187 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 6 01:49:51.543803 kubelet[2187]: I0306 01:49:51.542885 2187 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 6 01:49:51.545484 kubelet[2187]: I0306 01:49:51.545371 2187 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:49:51.546481 kubelet[2187]: I0306 01:49:51.545442 2187 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 
01:49:51.546481 kubelet[2187]: I0306 01:49:51.546468 2187 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 01:49:51.546481 kubelet[2187]: I0306 01:49:51.546485 2187 container_manager_linux.go:306] "Creating device plugin manager" Mar 6 01:49:51.546898 kubelet[2187]: I0306 01:49:51.546824 2187 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 6 01:49:51.549696 kubelet[2187]: I0306 01:49:51.549638 2187 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:49:51.550260 kubelet[2187]: I0306 01:49:51.550181 2187 kubelet.go:475] "Attempting to sync node with API server" Mar 6 01:49:51.550344 kubelet[2187]: I0306 01:49:51.550309 2187 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:49:51.550380 kubelet[2187]: I0306 01:49:51.550352 2187 kubelet.go:387] "Adding apiserver pod source" Mar 6 01:49:51.550380 kubelet[2187]: I0306 01:49:51.550374 2187 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:49:51.551348 kubelet[2187]: E0306 01:49:51.551177 2187 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 01:49:51.551348 kubelet[2187]: E0306 01:49:51.551196 2187 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 01:49:51.554706 kubelet[2187]: I0306 01:49:51.554608 2187 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" 
apiVersion="v1" Mar 6 01:49:51.555529 kubelet[2187]: I0306 01:49:51.555426 2187 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:49:51.555529 kubelet[2187]: I0306 01:49:51.555505 2187 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 6 01:49:51.555668 kubelet[2187]: W0306 01:49:51.555631 2187 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 6 01:49:51.563321 kubelet[2187]: I0306 01:49:51.563167 2187 server.go:1262] "Started kubelet" Mar 6 01:49:51.563865 kubelet[2187]: I0306 01:49:51.563477 2187 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 01:49:51.563865 kubelet[2187]: I0306 01:49:51.563628 2187 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 6 01:49:51.566877 kubelet[2187]: I0306 01:49:51.566800 2187 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:49:51.567006 kubelet[2187]: I0306 01:49:51.566930 2187 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:49:51.567405 kubelet[2187]: I0306 01:49:51.567378 2187 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 01:49:51.569877 kubelet[2187]: I0306 01:49:51.569056 2187 server.go:310] "Adding debug handlers to kubelet server" Mar 6 01:49:51.577792 kubelet[2187]: I0306 01:49:51.576768 2187 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:49:51.579667 kubelet[2187]: E0306 01:49:51.578928 2187 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:49:51.585843 kubelet[2187]: I0306 01:49:51.583098 2187 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 6 01:49:51.596825 kubelet[2187]: I0306 01:49:51.583197 2187 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 6 01:49:51.598411 kubelet[2187]: E0306 01:49:51.589054 2187 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:49:51.598525 kubelet[2187]: E0306 01:49:51.595165 2187 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 01:49:51.598643 kubelet[2187]: E0306 01:49:51.594944 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms" Mar 6 01:49:51.598691 kubelet[2187]: I0306 01:49:51.597357 2187 reconciler.go:29] "Reconciler: start to sync state" Mar 6 01:49:51.598741 kubelet[2187]: E0306 01:49:51.568832 2187 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1d74f980ec36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:49:51.56309919 +0000 UTC 
m=+1.941916387,LastTimestamp:2026-03-06 01:49:51.56309919 +0000 UTC m=+1.941916387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 01:49:51.599287 kubelet[2187]: I0306 01:49:51.599161 2187 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:49:51.603314 kubelet[2187]: I0306 01:49:51.602446 2187 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:49:51.603314 kubelet[2187]: I0306 01:49:51.602471 2187 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:49:51.640583 kubelet[2187]: I0306 01:49:51.640491 2187 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 6 01:49:51.644613 kubelet[2187]: I0306 01:49:51.644480 2187 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 6 01:49:51.644613 kubelet[2187]: I0306 01:49:51.644588 2187 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 6 01:49:51.644736 kubelet[2187]: I0306 01:49:51.644623 2187 kubelet.go:2428] "Starting kubelet main sync loop" Mar 6 01:49:51.644736 kubelet[2187]: E0306 01:49:51.644703 2187 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:49:51.645355 kubelet[2187]: E0306 01:49:51.645204 2187 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 01:49:51.648385 kubelet[2187]: I0306 01:49:51.647900 2187 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 01:49:51.648385 kubelet[2187]: I0306 01:49:51.647920 2187 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 01:49:51.648385 kubelet[2187]: I0306 01:49:51.647940 2187 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:49:51.651430 kubelet[2187]: I0306 01:49:51.651365 2187 policy_none.go:49] "None policy: Start" Mar 6 01:49:51.651430 kubelet[2187]: I0306 01:49:51.651429 2187 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 6 01:49:51.651499 kubelet[2187]: I0306 01:49:51.651449 2187 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 6 01:49:51.653824 kubelet[2187]: I0306 01:49:51.653745 2187 policy_none.go:47] "Start" Mar 6 01:49:51.664199 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 6 01:49:51.699071 kubelet[2187]: E0306 01:49:51.698974 2187 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:49:51.699512 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 6 01:49:51.707691 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 6 01:49:51.719502 kubelet[2187]: E0306 01:49:51.719413 2187 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:49:51.719958 kubelet[2187]: I0306 01:49:51.719790 2187 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 01:49:51.719958 kubelet[2187]: I0306 01:49:51.719812 2187 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:49:51.722647 kubelet[2187]: I0306 01:49:51.720192 2187 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 01:49:51.722647 kubelet[2187]: E0306 01:49:51.722280 2187 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 01:49:51.722647 kubelet[2187]: E0306 01:49:51.722453 2187 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 6 01:49:51.792877 systemd[1]: Created slice kubepods-burstable-podb6e375797fc1d8dbe45a258d14f07136.slice - libcontainer container kubepods-burstable-podb6e375797fc1d8dbe45a258d14f07136.slice. 
Mar 6 01:49:51.803641 kubelet[2187]: E0306 01:49:51.803457 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms" Mar 6 01:49:51.827279 kubelet[2187]: I0306 01:49:51.827043 2187 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:49:51.828075 kubelet[2187]: E0306 01:49:51.828002 2187 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Mar 6 01:49:51.828873 kubelet[2187]: E0306 01:49:51.828831 2187 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:49:51.835917 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 6 01:49:51.839568 kubelet[2187]: E0306 01:49:51.839425 2187 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:49:51.847477 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. 
Mar 6 01:49:51.851101 kubelet[2187]: E0306 01:49:51.851009 2187 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:49:51.903997 kubelet[2187]: I0306 01:49:51.903604 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6e375797fc1d8dbe45a258d14f07136-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6e375797fc1d8dbe45a258d14f07136\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:49:51.903997 kubelet[2187]: I0306 01:49:51.903756 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6e375797fc1d8dbe45a258d14f07136-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6e375797fc1d8dbe45a258d14f07136\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:49:51.903997 kubelet[2187]: I0306 01:49:51.903823 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:49:51.903997 kubelet[2187]: I0306 01:49:51.903850 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:49:51.903997 kubelet[2187]: I0306 01:49:51.903950 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:49:51.906060 kubelet[2187]: I0306 01:49:51.903978 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:49:51.906060 kubelet[2187]: I0306 01:49:51.904001 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6e375797fc1d8dbe45a258d14f07136-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b6e375797fc1d8dbe45a258d14f07136\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:49:51.906060 kubelet[2187]: I0306 01:49:51.904023 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:49:51.906060 kubelet[2187]: I0306 01:49:51.904138 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:49:51.995879 kubelet[2187]: E0306 01:49:51.994451 2187 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1d74f980ec36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:49:51.56309919 +0000 UTC m=+1.941916387,LastTimestamp:2026-03-06 01:49:51.56309919 +0000 UTC m=+1.941916387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 01:49:52.031850 kubelet[2187]: I0306 01:49:52.031731 2187 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:49:52.032403 kubelet[2187]: E0306 01:49:52.032309 2187 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Mar 6 01:49:52.140809 kubelet[2187]: E0306 01:49:52.140591 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:52.142510 containerd[1469]: time="2026-03-06T01:49:52.142366134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b6e375797fc1d8dbe45a258d14f07136,Namespace:kube-system,Attempt:0,}" Mar 6 01:49:52.143942 kubelet[2187]: E0306 01:49:52.143795 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:52.145140 containerd[1469]: time="2026-03-06T01:49:52.144639740Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 6 01:49:52.157055 kubelet[2187]: E0306 01:49:52.156838 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:52.157607 containerd[1469]: time="2026-03-06T01:49:52.157460004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 6 01:49:52.205612 kubelet[2187]: E0306 01:49:52.205492 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Mar 6 01:49:52.435021 kubelet[2187]: I0306 01:49:52.434770 2187 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:49:52.435395 kubelet[2187]: E0306 01:49:52.435268 2187 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Mar 6 01:49:52.626519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount309101537.mount: Deactivated successfully. 
Mar 6 01:49:52.635610 containerd[1469]: time="2026-03-06T01:49:52.635443625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:49:52.640097 containerd[1469]: time="2026-03-06T01:49:52.639938117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 6 01:49:52.641599 containerd[1469]: time="2026-03-06T01:49:52.641472459Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:49:52.643112 containerd[1469]: time="2026-03-06T01:49:52.643058847Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:49:52.644606 containerd[1469]: time="2026-03-06T01:49:52.644444516Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:49:52.646691 containerd[1469]: time="2026-03-06T01:49:52.646597362Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:49:52.648328 containerd[1469]: time="2026-03-06T01:49:52.648044842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:49:52.650324 containerd[1469]: time="2026-03-06T01:49:52.650165164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:49:52.654109 
containerd[1469]: time="2026-03-06T01:49:52.654039157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.393487ms" Mar 6 01:49:52.654914 containerd[1469]: time="2026-03-06T01:49:52.654838309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.250463ms" Mar 6 01:49:52.659519 containerd[1469]: time="2026-03-06T01:49:52.659383553Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 514.574127ms" Mar 6 01:49:52.761065 kubelet[2187]: E0306 01:49:52.760702 2187 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 01:49:52.784589 kubelet[2187]: E0306 01:49:52.784466 2187 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 
01:49:52.806736 kubelet[2187]: E0306 01:49:52.806653 2187 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 01:49:52.808441 kubelet[2187]: E0306 01:49:52.808378 2187 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 01:49:53.008290 kubelet[2187]: E0306 01:49:53.007885 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="1.6s" Mar 6 01:49:53.030334 containerd[1469]: time="2026-03-06T01:49:53.029433362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:49:53.030334 containerd[1469]: time="2026-03-06T01:49:53.029637802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:49:53.030334 containerd[1469]: time="2026-03-06T01:49:53.029662858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:49:53.030334 containerd[1469]: time="2026-03-06T01:49:53.029814030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:49:53.039461 containerd[1469]: time="2026-03-06T01:49:53.039000381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:49:53.039461 containerd[1469]: time="2026-03-06T01:49:53.039078545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:49:53.039461 containerd[1469]: time="2026-03-06T01:49:53.039093373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:49:53.039461 containerd[1469]: time="2026-03-06T01:49:53.039162742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:49:53.058614 containerd[1469]: time="2026-03-06T01:49:53.058431781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:49:53.058614 containerd[1469]: time="2026-03-06T01:49:53.058576990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:49:53.058797 containerd[1469]: time="2026-03-06T01:49:53.058601245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:49:53.058797 containerd[1469]: time="2026-03-06T01:49:53.058727880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:49:53.131606 systemd[1]: Started cri-containerd-9c130543cf0a7e3fb9c40a94299cf7acc06e7c9a7d709e27045aa910b9b1e9ca.scope - libcontainer container 9c130543cf0a7e3fb9c40a94299cf7acc06e7c9a7d709e27045aa910b9b1e9ca. 
Mar 6 01:49:53.147509 systemd[1]: Started cri-containerd-de49f2855c20c67cfa46e10fccd149667c60ad6532db17131fd4295f982de5e2.scope - libcontainer container de49f2855c20c67cfa46e10fccd149667c60ad6532db17131fd4295f982de5e2. Mar 6 01:49:53.155601 systemd[1]: Started cri-containerd-10fc4395f848c2a193efbcd647a477b747368d104e4bfe1b13175b55168c46a4.scope - libcontainer container 10fc4395f848c2a193efbcd647a477b747368d104e4bfe1b13175b55168c46a4. Mar 6 01:49:53.248058 kubelet[2187]: I0306 01:49:53.247979 2187 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:49:53.249019 kubelet[2187]: E0306 01:49:53.248447 2187 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Mar 6 01:49:53.268069 containerd[1469]: time="2026-03-06T01:49:53.267984482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c130543cf0a7e3fb9c40a94299cf7acc06e7c9a7d709e27045aa910b9b1e9ca\"" Mar 6 01:49:53.270158 kubelet[2187]: E0306 01:49:53.269961 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:53.286646 containerd[1469]: time="2026-03-06T01:49:53.285022192Z" level=info msg="CreateContainer within sandbox \"9c130543cf0a7e3fb9c40a94299cf7acc06e7c9a7d709e27045aa910b9b1e9ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 01:49:53.290891 containerd[1469]: time="2026-03-06T01:49:53.290319359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b6e375797fc1d8dbe45a258d14f07136,Namespace:kube-system,Attempt:0,} returns sandbox id \"de49f2855c20c67cfa46e10fccd149667c60ad6532db17131fd4295f982de5e2\"" Mar 6 01:49:53.293109 
kubelet[2187]: E0306 01:49:53.293032 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:53.301429 containerd[1469]: time="2026-03-06T01:49:53.301289291Z" level=info msg="CreateContainer within sandbox \"de49f2855c20c67cfa46e10fccd149667c60ad6532db17131fd4295f982de5e2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 01:49:53.316074 containerd[1469]: time="2026-03-06T01:49:53.315918736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"10fc4395f848c2a193efbcd647a477b747368d104e4bfe1b13175b55168c46a4\"" Mar 6 01:49:53.317076 kubelet[2187]: E0306 01:49:53.317003 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:53.322472 containerd[1469]: time="2026-03-06T01:49:53.322298122Z" level=info msg="CreateContainer within sandbox \"9c130543cf0a7e3fb9c40a94299cf7acc06e7c9a7d709e27045aa910b9b1e9ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e4cfc963424ad78d21949a642b6ece39babadbabf167e3a3c0b1c1dfa767ea64\"" Mar 6 01:49:53.323136 containerd[1469]: time="2026-03-06T01:49:53.323053076Z" level=info msg="StartContainer for \"e4cfc963424ad78d21949a642b6ece39babadbabf167e3a3c0b1c1dfa767ea64\"" Mar 6 01:49:53.324589 containerd[1469]: time="2026-03-06T01:49:53.324406582Z" level=info msg="CreateContainer within sandbox \"10fc4395f848c2a193efbcd647a477b747368d104e4bfe1b13175b55168c46a4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 01:49:53.343315 containerd[1469]: time="2026-03-06T01:49:53.343280415Z" level=info msg="CreateContainer within sandbox 
\"de49f2855c20c67cfa46e10fccd149667c60ad6532db17131fd4295f982de5e2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3c7e6ae8a159559aacce7c918879c6aec402c3a54fcc16f4e65818748476ea22\"" Mar 6 01:49:53.344676 containerd[1469]: time="2026-03-06T01:49:53.344657796Z" level=info msg="StartContainer for \"3c7e6ae8a159559aacce7c918879c6aec402c3a54fcc16f4e65818748476ea22\"" Mar 6 01:49:53.352819 containerd[1469]: time="2026-03-06T01:49:53.352650333Z" level=info msg="CreateContainer within sandbox \"10fc4395f848c2a193efbcd647a477b747368d104e4bfe1b13175b55168c46a4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"05e9537d5bbab2abd44fe0316cbbf515415fcac3e0d27491dc7938016bc077c0\"" Mar 6 01:49:53.355315 containerd[1469]: time="2026-03-06T01:49:53.354569060Z" level=info msg="StartContainer for \"05e9537d5bbab2abd44fe0316cbbf515415fcac3e0d27491dc7938016bc077c0\"" Mar 6 01:49:53.392629 systemd[1]: Started cri-containerd-e4cfc963424ad78d21949a642b6ece39babadbabf167e3a3c0b1c1dfa767ea64.scope - libcontainer container e4cfc963424ad78d21949a642b6ece39babadbabf167e3a3c0b1c1dfa767ea64. Mar 6 01:49:53.416578 systemd[1]: Started cri-containerd-3c7e6ae8a159559aacce7c918879c6aec402c3a54fcc16f4e65818748476ea22.scope - libcontainer container 3c7e6ae8a159559aacce7c918879c6aec402c3a54fcc16f4e65818748476ea22. Mar 6 01:49:53.423423 systemd[1]: Started cri-containerd-05e9537d5bbab2abd44fe0316cbbf515415fcac3e0d27491dc7938016bc077c0.scope - libcontainer container 05e9537d5bbab2abd44fe0316cbbf515415fcac3e0d27491dc7938016bc077c0. Mar 6 01:49:53.506010 update_engine[1453]: I20260306 01:49:53.505873 1453 update_attempter.cc:509] Updating boot flags... 
Mar 6 01:49:53.519781 kubelet[2187]: E0306 01:49:53.519634 2187 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 01:49:53.556410 containerd[1469]: time="2026-03-06T01:49:53.552427025Z" level=info msg="StartContainer for \"05e9537d5bbab2abd44fe0316cbbf515415fcac3e0d27491dc7938016bc077c0\" returns successfully" Mar 6 01:49:53.567323 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2442) Mar 6 01:49:53.586856 containerd[1469]: time="2026-03-06T01:49:53.586697432Z" level=info msg="StartContainer for \"e4cfc963424ad78d21949a642b6ece39babadbabf167e3a3c0b1c1dfa767ea64\" returns successfully" Mar 6 01:49:53.690305 kubelet[2187]: E0306 01:49:53.688967 2187 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:49:53.690769 kubelet[2187]: E0306 01:49:53.690461 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:53.696504 containerd[1469]: time="2026-03-06T01:49:53.696017280Z" level=info msg="StartContainer for \"3c7e6ae8a159559aacce7c918879c6aec402c3a54fcc16f4e65818748476ea22\" returns successfully" Mar 6 01:49:53.704994 kubelet[2187]: E0306 01:49:53.704914 2187 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:49:53.705194 kubelet[2187]: E0306 01:49:53.705114 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:53.711364 kubelet[2187]: E0306 01:49:53.710653 2187 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:49:53.711364 kubelet[2187]: E0306 01:49:53.710759 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:53.740320 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2217) Mar 6 01:49:54.747803 kubelet[2187]: E0306 01:49:54.745646 2187 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:49:54.747803 kubelet[2187]: E0306 01:49:54.745888 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:54.747803 kubelet[2187]: E0306 01:49:54.747488 2187 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:49:54.747803 kubelet[2187]: E0306 01:49:54.747660 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:54.868448 kubelet[2187]: I0306 01:49:54.867753 2187 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:49:55.834271 kubelet[2187]: E0306 01:49:55.834098 2187 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:49:55.835720 kubelet[2187]: E0306 01:49:55.834672 2187 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:59.059650 kubelet[2187]: E0306 01:49:59.059453 2187 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 6 01:49:59.225573 kubelet[2187]: I0306 01:49:59.225386 2187 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 01:49:59.225573 kubelet[2187]: E0306 01:49:59.225534 2187 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 6 01:49:59.318117 kubelet[2187]: I0306 01:49:59.316046 2187 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:49:59.529280 kubelet[2187]: E0306 01:49:59.529146 2187 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 6 01:49:59.529280 kubelet[2187]: I0306 01:49:59.529291 2187 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:49:59.532684 kubelet[2187]: E0306 01:49:59.532577 2187 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:49:59.532684 kubelet[2187]: I0306 01:49:59.532629 2187 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:49:59.534759 kubelet[2187]: E0306 01:49:59.534661 2187 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 6 
01:49:59.565195 kubelet[2187]: I0306 01:49:59.565141 2187 apiserver.go:52] "Watching apiserver" Mar 6 01:49:59.599175 kubelet[2187]: I0306 01:49:59.598983 2187 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 6 01:49:59.887908 kubelet[2187]: I0306 01:49:59.885941 2187 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:50:00.034190 kubelet[2187]: E0306 01:50:00.033875 2187 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 6 01:50:00.044104 kubelet[2187]: E0306 01:50:00.043987 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:01.249533 kubelet[2187]: I0306 01:50:01.249161 2187 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:50:01.264077 kubelet[2187]: E0306 01:50:01.263955 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:01.707064 kubelet[2187]: I0306 01:50:01.706099 2187 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.70606001 podStartE2EDuration="706.06001ms" podCreationTimestamp="2026-03-06 01:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:50:01.705629755 +0000 UTC m=+12.084446972" watchObservedRunningTime="2026-03-06 01:50:01.70606001 +0000 UTC m=+12.084877217" Mar 6 01:50:02.039052 kubelet[2187]: E0306 01:50:02.038943 2187 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:02.590639 systemd[1]: Reloading requested from client PID 2492 ('systemctl') (unit session-9.scope)... Mar 6 01:50:02.590682 systemd[1]: Reloading... Mar 6 01:50:02.816345 zram_generator::config[2534]: No configuration found. Mar 6 01:50:02.965198 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:50:03.058575 kubelet[2187]: I0306 01:50:03.058323 2187 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:50:03.256147 kubelet[2187]: E0306 01:50:03.256087 2187 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:03.269765 systemd[1]: Reloading finished in 678 ms. Mar 6 01:50:03.339586 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:50:03.353693 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 01:50:03.354003 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:50:03.354078 systemd[1]: kubelet.service: Consumed 6.404s CPU time, 133.3M memory peak, 0B memory swap peak. Mar 6 01:50:03.365847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:50:03.736868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:50:03.750791 (kubelet)[2576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:50:03.964717 kubelet[2576]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Mar 6 01:50:03.964717 kubelet[2576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:50:03.964717 kubelet[2576]: I0306 01:50:03.964763 2576 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 01:50:04.015659 kubelet[2576]: I0306 01:50:04.012398 2576 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 6 01:50:04.015659 kubelet[2576]: I0306 01:50:04.013320 2576 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:50:04.015659 kubelet[2576]: I0306 01:50:04.013437 2576 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 6 01:50:04.015659 kubelet[2576]: I0306 01:50:04.013455 2576 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 01:50:04.015659 kubelet[2576]: I0306 01:50:04.014026 2576 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 01:50:04.021065 kubelet[2576]: I0306 01:50:04.020903 2576 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 01:50:04.023515 kubelet[2576]: I0306 01:50:04.023442 2576 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:50:04.033758 kubelet[2576]: E0306 01:50:04.033705 2576 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:50:04.033982 kubelet[2576]: I0306 01:50:04.033822 2576 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 6 01:50:04.049584 kubelet[2576]: I0306 01:50:04.049462 2576 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 6 01:50:04.050136 kubelet[2576]: I0306 01:50:04.049905 2576 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:50:04.050738 kubelet[2576]: I0306 01:50:04.049992 2576 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 
01:50:04.050738 kubelet[2576]: I0306 01:50:04.050581 2576 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 01:50:04.050738 kubelet[2576]: I0306 01:50:04.050600 2576 container_manager_linux.go:306] "Creating device plugin manager" Mar 6 01:50:04.050738 kubelet[2576]: I0306 01:50:04.050693 2576 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 6 01:50:04.051438 kubelet[2576]: I0306 01:50:04.051370 2576 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:50:04.052003 kubelet[2576]: I0306 01:50:04.051936 2576 kubelet.go:475] "Attempting to sync node with API server" Mar 6 01:50:04.054361 kubelet[2576]: I0306 01:50:04.052094 2576 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:50:04.054361 kubelet[2576]: I0306 01:50:04.052131 2576 kubelet.go:387] "Adding apiserver pod source" Mar 6 01:50:04.054361 kubelet[2576]: I0306 01:50:04.052152 2576 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:50:04.056330 kubelet[2576]: I0306 01:50:04.056278 2576 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:50:04.057288 kubelet[2576]: I0306 01:50:04.057182 2576 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:50:04.057475 kubelet[2576]: I0306 01:50:04.057424 2576 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 6 01:50:04.071169 kubelet[2576]: I0306 01:50:04.070942 2576 server.go:1262] "Started kubelet" Mar 6 01:50:04.083131 kubelet[2576]: I0306 01:50:04.083029 2576 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 01:50:04.083353 kubelet[2576]: I0306 01:50:04.083147 2576 
server_v1.go:49] "podresources" method="list" useActivePods=true Mar 6 01:50:04.083969 kubelet[2576]: I0306 01:50:04.083884 2576 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:50:04.086296 kubelet[2576]: I0306 01:50:04.084069 2576 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:50:04.086296 kubelet[2576]: I0306 01:50:04.084332 2576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 01:50:04.089426 kubelet[2576]: I0306 01:50:04.089400 2576 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 6 01:50:04.091410 kubelet[2576]: I0306 01:50:04.090169 2576 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 6 01:50:04.093372 kubelet[2576]: I0306 01:50:04.092133 2576 reconciler.go:29] "Reconciler: start to sync state" Mar 6 01:50:04.099146 kubelet[2576]: I0306 01:50:04.097650 2576 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:50:04.102051 kubelet[2576]: I0306 01:50:04.100056 2576 server.go:310] "Adding debug handlers to kubelet server" Mar 6 01:50:04.104366 kubelet[2576]: E0306 01:50:04.103745 2576 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:50:04.106106 kubelet[2576]: I0306 01:50:04.105382 2576 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:50:04.106106 kubelet[2576]: I0306 01:50:04.105406 2576 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:50:04.106106 kubelet[2576]: I0306 01:50:04.105551 2576 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:50:04.126838 kubelet[2576]: I0306 01:50:04.126525 2576 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 6 01:50:04.130329 kubelet[2576]: I0306 01:50:04.129700 2576 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 6 01:50:04.130329 kubelet[2576]: I0306 01:50:04.129732 2576 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 6 01:50:04.130329 kubelet[2576]: I0306 01:50:04.129756 2576 kubelet.go:2428] "Starting kubelet main sync loop" Mar 6 01:50:04.130329 kubelet[2576]: E0306 01:50:04.129807 2576 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:50:04.245925 kubelet[2576]: E0306 01:50:04.231803 2576 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 6 01:50:04.314593 kubelet[2576]: I0306 01:50:04.313664 2576 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 01:50:04.314593 kubelet[2576]: I0306 01:50:04.313730 2576 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 01:50:04.314593 kubelet[2576]: I0306 01:50:04.313755 2576 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:50:04.315392 kubelet[2576]: I0306 01:50:04.315179 2576 
state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 6 01:50:04.316275 kubelet[2576]: I0306 01:50:04.315196 2576 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 6 01:50:04.316275 kubelet[2576]: I0306 01:50:04.315465 2576 policy_none.go:49] "None policy: Start" Mar 6 01:50:04.316275 kubelet[2576]: I0306 01:50:04.315535 2576 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 6 01:50:04.316275 kubelet[2576]: I0306 01:50:04.315556 2576 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 6 01:50:04.316275 kubelet[2576]: I0306 01:50:04.315704 2576 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 6 01:50:04.316275 kubelet[2576]: I0306 01:50:04.315718 2576 policy_none.go:47] "Start" Mar 6 01:50:04.324942 kubelet[2576]: E0306 01:50:04.324869 2576 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:50:04.325198 kubelet[2576]: I0306 01:50:04.325134 2576 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 01:50:04.325293 kubelet[2576]: I0306 01:50:04.325190 2576 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:50:04.326070 kubelet[2576]: I0306 01:50:04.326025 2576 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 01:50:04.334850 kubelet[2576]: E0306 01:50:04.333711 2576 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 6 01:50:04.434318 kubelet[2576]: I0306 01:50:04.433369 2576 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:50:04.434318 kubelet[2576]: I0306 01:50:04.433460 2576 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:50:04.434318 kubelet[2576]: I0306 01:50:04.433776 2576 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:50:04.442138 kubelet[2576]: I0306 01:50:04.442087 2576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:50:04.448131 kubelet[2576]: E0306 01:50:04.448047 2576 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:50:04.450153 kubelet[2576]: E0306 01:50:04.449793 2576 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 01:50:04.453376 kubelet[2576]: I0306 01:50:04.452352 2576 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 6 01:50:04.453376 kubelet[2576]: I0306 01:50:04.452528 2576 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 01:50:04.506693 kubelet[2576]: I0306 01:50:04.506354 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:50:04.506693 kubelet[2576]: I0306 01:50:04.506458 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6e375797fc1d8dbe45a258d14f07136-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6e375797fc1d8dbe45a258d14f07136\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:50:04.506693 kubelet[2576]: I0306 01:50:04.506606 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6e375797fc1d8dbe45a258d14f07136-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b6e375797fc1d8dbe45a258d14f07136\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:50:04.506693 kubelet[2576]: I0306 01:50:04.506631 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6e375797fc1d8dbe45a258d14f07136-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b6e375797fc1d8dbe45a258d14f07136\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:50:04.506693 kubelet[2576]: I0306 01:50:04.506711 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:50:04.508081 kubelet[2576]: I0306 01:50:04.506734 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:50:04.508081 kubelet[2576]: I0306 01:50:04.506754 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:50:04.508081 kubelet[2576]: I0306 01:50:04.506946 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:50:04.508081 kubelet[2576]: I0306 01:50:04.507013 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:50:04.749527 kubelet[2576]: E0306 01:50:04.749375 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:04.750896 kubelet[2576]: E0306 01:50:04.750839 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:04.751138 kubelet[2576]: E0306 01:50:04.751084 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:05.056029 kubelet[2576]: I0306 01:50:05.055364 2576 apiserver.go:52] "Watching apiserver" Mar 6 01:50:05.092142 kubelet[2576]: I0306 01:50:05.091937 2576 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 6 01:50:05.302526 
kubelet[2576]: I0306 01:50:05.302403 2576 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:50:05.302721 kubelet[2576]: E0306 01:50:05.302599 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:05.302965 kubelet[2576]: I0306 01:50:05.302867 2576 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:50:05.317379 kubelet[2576]: E0306 01:50:05.316292 2576 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 01:50:05.317379 kubelet[2576]: E0306 01:50:05.316584 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:05.317663 kubelet[2576]: E0306 01:50:05.317542 2576 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 01:50:05.317663 kubelet[2576]: E0306 01:50:05.317653 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:05.343127 kubelet[2576]: I0306 01:50:05.343031 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.343011367 podStartE2EDuration="2.343011367s" podCreationTimestamp="2026-03-06 01:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:50:05.342723491 +0000 UTC m=+1.580292387" watchObservedRunningTime="2026-03-06 01:50:05.343011367 +0000 UTC m=+1.580580264" Mar 6 
01:50:05.361966 kubelet[2576]: I0306 01:50:05.361893 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.361876277 podStartE2EDuration="1.361876277s" podCreationTimestamp="2026-03-06 01:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:50:05.36175739 +0000 UTC m=+1.599326378" watchObservedRunningTime="2026-03-06 01:50:05.361876277 +0000 UTC m=+1.599445194" Mar 6 01:50:06.419842 kubelet[2576]: E0306 01:50:06.419444 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:06.422095 kubelet[2576]: E0306 01:50:06.421194 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:07.393740 kubelet[2576]: E0306 01:50:07.393554 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:07.425604 kubelet[2576]: I0306 01:50:07.425445 2576 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 01:50:07.426273 containerd[1469]: time="2026-03-06T01:50:07.425998301Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 6 01:50:07.426723 kubelet[2576]: I0306 01:50:07.426200 2576 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 01:50:07.786548 kubelet[2576]: E0306 01:50:07.786366 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:08.115833 kubelet[2576]: E0306 01:50:08.105155 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:08.349631 systemd[1]: Created slice kubepods-besteffort-poddcbe35b5_65d5_447d_95a7_46374e57b784.slice - libcontainer container kubepods-besteffort-poddcbe35b5_65d5_447d_95a7_46374e57b784.slice. Mar 6 01:50:08.424631 kubelet[2576]: E0306 01:50:08.422658 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:08.424631 kubelet[2576]: E0306 01:50:08.422846 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:08.437669 kubelet[2576]: I0306 01:50:08.437628 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcbe35b5-65d5-447d-95a7-46374e57b784-xtables-lock\") pod \"kube-proxy-t455p\" (UID: \"dcbe35b5-65d5-447d-95a7-46374e57b784\") " pod="kube-system/kube-proxy-t455p" Mar 6 01:50:08.440559 kubelet[2576]: I0306 01:50:08.438343 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x92sk\" (UniqueName: \"kubernetes.io/projected/dcbe35b5-65d5-447d-95a7-46374e57b784-kube-api-access-x92sk\") pod \"kube-proxy-t455p\" (UID: 
\"dcbe35b5-65d5-447d-95a7-46374e57b784\") " pod="kube-system/kube-proxy-t455p" Mar 6 01:50:08.440559 kubelet[2576]: I0306 01:50:08.438390 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dcbe35b5-65d5-447d-95a7-46374e57b784-kube-proxy\") pod \"kube-proxy-t455p\" (UID: \"dcbe35b5-65d5-447d-95a7-46374e57b784\") " pod="kube-system/kube-proxy-t455p" Mar 6 01:50:08.440559 kubelet[2576]: I0306 01:50:08.438545 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcbe35b5-65d5-447d-95a7-46374e57b784-lib-modules\") pod \"kube-proxy-t455p\" (UID: \"dcbe35b5-65d5-447d-95a7-46374e57b784\") " pod="kube-system/kube-proxy-t455p" Mar 6 01:50:08.603654 systemd[1]: Created slice kubepods-besteffort-pod19ab7e31_976b_4cfa_be09_3b4133a6c839.slice - libcontainer container kubepods-besteffort-pod19ab7e31_976b_4cfa_be09_3b4133a6c839.slice. 
Mar 6 01:50:08.640401 kubelet[2576]: I0306 01:50:08.639941 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19ab7e31-976b-4cfa-be09-3b4133a6c839-var-lib-calico\") pod \"tigera-operator-5588576f44-vhbww\" (UID: \"19ab7e31-976b-4cfa-be09-3b4133a6c839\") " pod="tigera-operator/tigera-operator-5588576f44-vhbww" Mar 6 01:50:08.640401 kubelet[2576]: I0306 01:50:08.640129 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk7fv\" (UniqueName: \"kubernetes.io/projected/19ab7e31-976b-4cfa-be09-3b4133a6c839-kube-api-access-jk7fv\") pod \"tigera-operator-5588576f44-vhbww\" (UID: \"19ab7e31-976b-4cfa-be09-3b4133a6c839\") " pod="tigera-operator/tigera-operator-5588576f44-vhbww" Mar 6 01:50:08.671710 kubelet[2576]: E0306 01:50:08.670901 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:08.675441 containerd[1469]: time="2026-03-06T01:50:08.673135148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t455p,Uid:dcbe35b5-65d5-447d-95a7-46374e57b784,Namespace:kube-system,Attempt:0,}" Mar 6 01:50:08.765931 containerd[1469]: time="2026-03-06T01:50:08.764392719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:08.765931 containerd[1469]: time="2026-03-06T01:50:08.765837812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:08.765931 containerd[1469]: time="2026-03-06T01:50:08.765857469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:08.766302 containerd[1469]: time="2026-03-06T01:50:08.766016656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:08.821575 systemd[1]: Started cri-containerd-2b1c30bbb8c4c200ec6f67fa5aaad693d40b9a5a15fdcd9feae40d8c75188bfa.scope - libcontainer container 2b1c30bbb8c4c200ec6f67fa5aaad693d40b9a5a15fdcd9feae40d8c75188bfa. Mar 6 01:50:08.930066 containerd[1469]: time="2026-03-06T01:50:08.929271070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-vhbww,Uid:19ab7e31-976b-4cfa-be09-3b4133a6c839,Namespace:tigera-operator,Attempt:0,}" Mar 6 01:50:09.440938 containerd[1469]: time="2026-03-06T01:50:09.440744659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t455p,Uid:dcbe35b5-65d5-447d-95a7-46374e57b784,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b1c30bbb8c4c200ec6f67fa5aaad693d40b9a5a15fdcd9feae40d8c75188bfa\"" Mar 6 01:50:09.444317 kubelet[2576]: E0306 01:50:09.444036 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:09.457549 containerd[1469]: time="2026-03-06T01:50:09.456744479Z" level=info msg="CreateContainer within sandbox \"2b1c30bbb8c4c200ec6f67fa5aaad693d40b9a5a15fdcd9feae40d8c75188bfa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 01:50:09.490585 containerd[1469]: time="2026-03-06T01:50:09.490469622Z" level=info msg="CreateContainer within sandbox \"2b1c30bbb8c4c200ec6f67fa5aaad693d40b9a5a15fdcd9feae40d8c75188bfa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2442193ee695cfbf39234d64336e51369a7eff12c500d4d4e8c7715a0edd5ba9\"" Mar 6 01:50:09.492681 containerd[1469]: time="2026-03-06T01:50:09.492602280Z" level=info msg="StartContainer for 
\"2442193ee695cfbf39234d64336e51369a7eff12c500d4d4e8c7715a0edd5ba9\"" Mar 6 01:50:09.497464 containerd[1469]: time="2026-03-06T01:50:09.495918827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:09.497464 containerd[1469]: time="2026-03-06T01:50:09.495988577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:09.497464 containerd[1469]: time="2026-03-06T01:50:09.495999006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:09.497464 containerd[1469]: time="2026-03-06T01:50:09.496141462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:09.546621 systemd[1]: Started cri-containerd-c2b2adf69938d60fbd538dea93f158301a2894945a5f4a11e67ae143a2cb4a67.scope - libcontainer container c2b2adf69938d60fbd538dea93f158301a2894945a5f4a11e67ae143a2cb4a67. Mar 6 01:50:09.731405 systemd[1]: Started cri-containerd-2442193ee695cfbf39234d64336e51369a7eff12c500d4d4e8c7715a0edd5ba9.scope - libcontainer container 2442193ee695cfbf39234d64336e51369a7eff12c500d4d4e8c7715a0edd5ba9. 
Mar 6 01:50:09.804260 containerd[1469]: time="2026-03-06T01:50:09.804124823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-vhbww,Uid:19ab7e31-976b-4cfa-be09-3b4133a6c839,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c2b2adf69938d60fbd538dea93f158301a2894945a5f4a11e67ae143a2cb4a67\"" Mar 6 01:50:09.807549 containerd[1469]: time="2026-03-06T01:50:09.807468816Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 6 01:50:09.826938 containerd[1469]: time="2026-03-06T01:50:09.826835375Z" level=info msg="StartContainer for \"2442193ee695cfbf39234d64336e51369a7eff12c500d4d4e8c7715a0edd5ba9\" returns successfully" Mar 6 01:50:10.638976 kubelet[2576]: E0306 01:50:10.638843 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:10.744874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164774128.mount: Deactivated successfully. 
Mar 6 01:50:12.737428 containerd[1469]: time="2026-03-06T01:50:12.737347892Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:12.738684 containerd[1469]: time="2026-03-06T01:50:12.738616517Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 6 01:50:12.740137 containerd[1469]: time="2026-03-06T01:50:12.740085667Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:12.742588 containerd[1469]: time="2026-03-06T01:50:12.742520246Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:12.743441 containerd[1469]: time="2026-03-06T01:50:12.743382965Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.935826986s" Mar 6 01:50:12.743441 containerd[1469]: time="2026-03-06T01:50:12.743437236Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 6 01:50:12.750409 containerd[1469]: time="2026-03-06T01:50:12.750347323Z" level=info msg="CreateContainer within sandbox \"c2b2adf69938d60fbd538dea93f158301a2894945a5f4a11e67ae143a2cb4a67\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 6 01:50:12.767223 containerd[1469]: time="2026-03-06T01:50:12.767143267Z" level=info msg="CreateContainer within sandbox 
\"c2b2adf69938d60fbd538dea93f158301a2894945a5f4a11e67ae143a2cb4a67\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f07bf06612b03a7a9ef92b7368f4f13147ec35b14159c7cc5c9e83d11bddba61\"" Mar 6 01:50:12.769960 containerd[1469]: time="2026-03-06T01:50:12.768301900Z" level=info msg="StartContainer for \"f07bf06612b03a7a9ef92b7368f4f13147ec35b14159c7cc5c9e83d11bddba61\"" Mar 6 01:50:12.852545 systemd[1]: run-containerd-runc-k8s.io-f07bf06612b03a7a9ef92b7368f4f13147ec35b14159c7cc5c9e83d11bddba61-runc.RMMbr2.mount: Deactivated successfully. Mar 6 01:50:12.862568 systemd[1]: Started cri-containerd-f07bf06612b03a7a9ef92b7368f4f13147ec35b14159c7cc5c9e83d11bddba61.scope - libcontainer container f07bf06612b03a7a9ef92b7368f4f13147ec35b14159c7cc5c9e83d11bddba61. Mar 6 01:50:13.007516 containerd[1469]: time="2026-03-06T01:50:13.002839173Z" level=info msg="StartContainer for \"f07bf06612b03a7a9ef92b7368f4f13147ec35b14159c7cc5c9e83d11bddba61\" returns successfully" Mar 6 01:50:13.662838 kubelet[2576]: I0306 01:50:13.662622 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t455p" podStartSLOduration=5.662604732 podStartE2EDuration="5.662604732s" podCreationTimestamp="2026-03-06 01:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:50:10.659772787 +0000 UTC m=+6.897341684" watchObservedRunningTime="2026-03-06 01:50:13.662604732 +0000 UTC m=+9.900173629" Mar 6 01:50:19.790067 sudo[1676]: pam_unix(sudo:session): session closed for user root Mar 6 01:50:19.793400 sshd[1672]: pam_unix(sshd:session): session closed for user core Mar 6 01:50:19.805062 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:38872.service: Deactivated successfully. Mar 6 01:50:19.810184 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 6 01:50:19.812752 systemd[1]: session-9.scope: Consumed 11.528s CPU time, 160.4M memory peak, 0B memory swap peak. Mar 6 01:50:19.814094 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Mar 6 01:50:19.818108 systemd-logind[1447]: Removed session 9. Mar 6 01:50:22.712427 kubelet[2576]: I0306 01:50:22.712294 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-vhbww" podStartSLOduration=11.774263541 podStartE2EDuration="14.712277905s" podCreationTimestamp="2026-03-06 01:50:08 +0000 UTC" firstStartedPulling="2026-03-06 01:50:09.806415853 +0000 UTC m=+6.043984761" lastFinishedPulling="2026-03-06 01:50:12.744430227 +0000 UTC m=+8.981999125" observedRunningTime="2026-03-06 01:50:13.662834991 +0000 UTC m=+9.900403888" watchObservedRunningTime="2026-03-06 01:50:22.712277905 +0000 UTC m=+18.949846812" Mar 6 01:50:22.737339 systemd[1]: Created slice kubepods-besteffort-podc12d3a67_5fc3_428f_9f75_c240129f657f.slice - libcontainer container kubepods-besteffort-podc12d3a67_5fc3_428f_9f75_c240129f657f.slice. 
Mar 6 01:50:22.755669 kubelet[2576]: I0306 01:50:22.755575 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c12d3a67-5fc3-428f-9f75-c240129f657f-tigera-ca-bundle\") pod \"calico-typha-b999f649d-hf7kq\" (UID: \"c12d3a67-5fc3-428f-9f75-c240129f657f\") " pod="calico-system/calico-typha-b999f649d-hf7kq" Mar 6 01:50:22.755669 kubelet[2576]: I0306 01:50:22.755639 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c12d3a67-5fc3-428f-9f75-c240129f657f-typha-certs\") pod \"calico-typha-b999f649d-hf7kq\" (UID: \"c12d3a67-5fc3-428f-9f75-c240129f657f\") " pod="calico-system/calico-typha-b999f649d-hf7kq" Mar 6 01:50:22.755669 kubelet[2576]: I0306 01:50:22.755657 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dbcf\" (UniqueName: \"kubernetes.io/projected/c12d3a67-5fc3-428f-9f75-c240129f657f-kube-api-access-4dbcf\") pod \"calico-typha-b999f649d-hf7kq\" (UID: \"c12d3a67-5fc3-428f-9f75-c240129f657f\") " pod="calico-system/calico-typha-b999f649d-hf7kq" Mar 6 01:50:22.804422 systemd[1]: Created slice kubepods-besteffort-pod3bae5647_13e2_41e8_bb16_7250bd2f6297.slice - libcontainer container kubepods-besteffort-pod3bae5647_13e2_41e8_bb16_7250bd2f6297.slice. 
Mar 6 01:50:22.856019 kubelet[2576]: I0306 01:50:22.855940 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-xtables-lock\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856019 kubelet[2576]: I0306 01:50:22.856025 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-cni-log-dir\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856309 kubelet[2576]: I0306 01:50:22.856043 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-lib-modules\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856309 kubelet[2576]: I0306 01:50:22.856056 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3bae5647-13e2-41e8-bb16-7250bd2f6297-node-certs\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856309 kubelet[2576]: I0306 01:50:22.856068 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-nodeproc\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856309 kubelet[2576]: I0306 01:50:22.856095 2576 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-cni-net-dir\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856309 kubelet[2576]: I0306 01:50:22.856122 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-cni-bin-dir\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856470 kubelet[2576]: I0306 01:50:22.856158 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-flexvol-driver-host\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856470 kubelet[2576]: I0306 01:50:22.856270 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bae5647-13e2-41e8-bb16-7250bd2f6297-tigera-ca-bundle\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856470 kubelet[2576]: I0306 01:50:22.856285 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-var-lib-calico\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856470 kubelet[2576]: I0306 01:50:22.856298 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-bpffs\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856470 kubelet[2576]: I0306 01:50:22.856310 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-policysync\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856680 kubelet[2576]: I0306 01:50:22.856324 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvnxh\" (UniqueName: \"kubernetes.io/projected/3bae5647-13e2-41e8-bb16-7250bd2f6297-kube-api-access-bvnxh\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856680 kubelet[2576]: I0306 01:50:22.856339 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-sys-fs\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.856680 kubelet[2576]: I0306 01:50:22.856380 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3bae5647-13e2-41e8-bb16-7250bd2f6297-var-run-calico\") pod \"calico-node-z4w7n\" (UID: \"3bae5647-13e2-41e8-bb16-7250bd2f6297\") " pod="calico-system/calico-node-z4w7n" Mar 6 01:50:22.917367 kubelet[2576]: E0306 01:50:22.916996 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xv9jc" podUID="20ed6bd5-f019-43c9-934e-78717b2dba0c" Mar 6 01:50:22.958415 kubelet[2576]: I0306 01:50:22.957191 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ed6bd5-f019-43c9-934e-78717b2dba0c-kubelet-dir\") pod \"csi-node-driver-xv9jc\" (UID: \"20ed6bd5-f019-43c9-934e-78717b2dba0c\") " pod="calico-system/csi-node-driver-xv9jc" Mar 6 01:50:22.958415 kubelet[2576]: I0306 01:50:22.957279 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/20ed6bd5-f019-43c9-934e-78717b2dba0c-registration-dir\") pod \"csi-node-driver-xv9jc\" (UID: \"20ed6bd5-f019-43c9-934e-78717b2dba0c\") " pod="calico-system/csi-node-driver-xv9jc" Mar 6 01:50:22.958415 kubelet[2576]: I0306 01:50:22.957319 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/20ed6bd5-f019-43c9-934e-78717b2dba0c-varrun\") pod \"csi-node-driver-xv9jc\" (UID: \"20ed6bd5-f019-43c9-934e-78717b2dba0c\") " pod="calico-system/csi-node-driver-xv9jc" Mar 6 01:50:22.958415 kubelet[2576]: I0306 01:50:22.957343 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/20ed6bd5-f019-43c9-934e-78717b2dba0c-socket-dir\") pod \"csi-node-driver-xv9jc\" (UID: \"20ed6bd5-f019-43c9-934e-78717b2dba0c\") " pod="calico-system/csi-node-driver-xv9jc" Mar 6 01:50:22.958415 kubelet[2576]: I0306 01:50:22.957367 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ptbw\" (UniqueName: \"kubernetes.io/projected/20ed6bd5-f019-43c9-934e-78717b2dba0c-kube-api-access-7ptbw\") pod \"csi-node-driver-xv9jc\" 
(UID: \"20ed6bd5-f019-43c9-934e-78717b2dba0c\") " pod="calico-system/csi-node-driver-xv9jc" Mar 6 01:50:22.967096 kubelet[2576]: E0306 01:50:22.966974 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:22.974324 kubelet[2576]: W0306 01:50:22.974289 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:22.975749 kubelet[2576]: E0306 01:50:22.974728 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:22.976660 kubelet[2576]: E0306 01:50:22.976644 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:22.976752 kubelet[2576]: W0306 01:50:22.976735 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:22.976807 kubelet[2576]: E0306 01:50:22.976795 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:22.993967 kubelet[2576]: E0306 01:50:22.993937 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:22.994156 kubelet[2576]: W0306 01:50:22.994088 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:22.994156 kubelet[2576]: E0306 01:50:22.994118 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.046792 kubelet[2576]: E0306 01:50:23.046688 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:23.047659 containerd[1469]: time="2026-03-06T01:50:23.047628582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b999f649d-hf7kq,Uid:c12d3a67-5fc3-428f-9f75-c240129f657f,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:23.058825 kubelet[2576]: E0306 01:50:23.058791 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.059357 kubelet[2576]: W0306 01:50:23.059037 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.059357 kubelet[2576]: E0306 01:50:23.059071 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.060113 kubelet[2576]: E0306 01:50:23.060059 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.060113 kubelet[2576]: W0306 01:50:23.060075 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.060113 kubelet[2576]: E0306 01:50:23.060090 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.061788 kubelet[2576]: E0306 01:50:23.061564 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.061788 kubelet[2576]: W0306 01:50:23.061609 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.061788 kubelet[2576]: E0306 01:50:23.061626 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.066322 kubelet[2576]: E0306 01:50:23.063023 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.066322 kubelet[2576]: W0306 01:50:23.063038 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.066322 kubelet[2576]: E0306 01:50:23.063050 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.066322 kubelet[2576]: E0306 01:50:23.063541 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.066322 kubelet[2576]: W0306 01:50:23.063555 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.066322 kubelet[2576]: E0306 01:50:23.063570 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.066322 kubelet[2576]: E0306 01:50:23.064141 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.066322 kubelet[2576]: W0306 01:50:23.064156 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.066322 kubelet[2576]: E0306 01:50:23.064172 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.066793 kubelet[2576]: E0306 01:50:23.066778 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.066793 kubelet[2576]: W0306 01:50:23.066792 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.066864 kubelet[2576]: E0306 01:50:23.066803 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.067460 kubelet[2576]: E0306 01:50:23.067383 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.067460 kubelet[2576]: W0306 01:50:23.067393 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.067460 kubelet[2576]: E0306 01:50:23.067403 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.069525 kubelet[2576]: E0306 01:50:23.069455 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.069586 kubelet[2576]: W0306 01:50:23.069530 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.069586 kubelet[2576]: E0306 01:50:23.069546 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.070010 kubelet[2576]: E0306 01:50:23.069949 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.070010 kubelet[2576]: W0306 01:50:23.069988 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.070010 kubelet[2576]: E0306 01:50:23.069998 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.070779 kubelet[2576]: E0306 01:50:23.070721 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.070779 kubelet[2576]: W0306 01:50:23.070737 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.070779 kubelet[2576]: E0306 01:50:23.070750 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.071368 kubelet[2576]: E0306 01:50:23.071334 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.071368 kubelet[2576]: W0306 01:50:23.071368 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.071452 kubelet[2576]: E0306 01:50:23.071377 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.071838 kubelet[2576]: E0306 01:50:23.071800 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.071838 kubelet[2576]: W0306 01:50:23.071834 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.071916 kubelet[2576]: E0306 01:50:23.071845 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.072328 kubelet[2576]: E0306 01:50:23.072184 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.072431 kubelet[2576]: W0306 01:50:23.072354 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.072431 kubelet[2576]: E0306 01:50:23.072365 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.072832 kubelet[2576]: E0306 01:50:23.072775 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.072832 kubelet[2576]: W0306 01:50:23.072818 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.072832 kubelet[2576]: E0306 01:50:23.072833 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.073371 kubelet[2576]: E0306 01:50:23.073327 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.073371 kubelet[2576]: W0306 01:50:23.073361 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.073371 kubelet[2576]: E0306 01:50:23.073373 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.073720 kubelet[2576]: E0306 01:50:23.073690 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.073720 kubelet[2576]: W0306 01:50:23.073706 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.073720 kubelet[2576]: E0306 01:50:23.073717 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.074281 kubelet[2576]: E0306 01:50:23.074085 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.074281 kubelet[2576]: W0306 01:50:23.074122 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.074281 kubelet[2576]: E0306 01:50:23.074134 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.074618 kubelet[2576]: E0306 01:50:23.074595 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.074618 kubelet[2576]: W0306 01:50:23.074609 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.074909 kubelet[2576]: E0306 01:50:23.074621 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.075003 kubelet[2576]: E0306 01:50:23.074938 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.075003 kubelet[2576]: W0306 01:50:23.074948 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.075003 kubelet[2576]: E0306 01:50:23.074959 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.075557 kubelet[2576]: E0306 01:50:23.075386 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.075557 kubelet[2576]: W0306 01:50:23.075420 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.075557 kubelet[2576]: E0306 01:50:23.075432 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.076127 kubelet[2576]: E0306 01:50:23.075922 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.076127 kubelet[2576]: W0306 01:50:23.075967 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.076127 kubelet[2576]: E0306 01:50:23.075979 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.076742 kubelet[2576]: E0306 01:50:23.076561 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.076742 kubelet[2576]: W0306 01:50:23.076572 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.076742 kubelet[2576]: E0306 01:50:23.076584 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.077191 kubelet[2576]: E0306 01:50:23.076909 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.077191 kubelet[2576]: W0306 01:50:23.076922 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.077191 kubelet[2576]: E0306 01:50:23.076934 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.077379 kubelet[2576]: E0306 01:50:23.077369 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.077426 kubelet[2576]: W0306 01:50:23.077379 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.077426 kubelet[2576]: E0306 01:50:23.077390 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:50:23.092377 kubelet[2576]: E0306 01:50:23.092307 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:50:23.092459 kubelet[2576]: W0306 01:50:23.092434 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:50:23.092459 kubelet[2576]: E0306 01:50:23.092453 2576 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:50:23.100780 containerd[1469]: time="2026-03-06T01:50:23.100605530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:23.100780 containerd[1469]: time="2026-03-06T01:50:23.100699666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:23.100780 containerd[1469]: time="2026-03-06T01:50:23.100718251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:23.101077 containerd[1469]: time="2026-03-06T01:50:23.100861698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:23.117160 containerd[1469]: time="2026-03-06T01:50:23.117082014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4w7n,Uid:3bae5647-13e2-41e8-bb16-7250bd2f6297,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:23.136589 systemd[1]: Started cri-containerd-93895b8712b0327bbb1b6161d6d587d580c7b8b289250f8819599636d6532737.scope - libcontainer container 93895b8712b0327bbb1b6161d6d587d580c7b8b289250f8819599636d6532737. 
Mar 6 01:50:23.193703 containerd[1469]: time="2026-03-06T01:50:23.190131195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:23.193703 containerd[1469]: time="2026-03-06T01:50:23.190416016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:23.193703 containerd[1469]: time="2026-03-06T01:50:23.190515582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:23.193703 containerd[1469]: time="2026-03-06T01:50:23.192534779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:23.222728 containerd[1469]: time="2026-03-06T01:50:23.222034360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b999f649d-hf7kq,Uid:c12d3a67-5fc3-428f-9f75-c240129f657f,Namespace:calico-system,Attempt:0,} returns sandbox id \"93895b8712b0327bbb1b6161d6d587d580c7b8b289250f8819599636d6532737\"" Mar 6 01:50:23.224422 kubelet[2576]: E0306 01:50:23.224381 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:23.228774 systemd[1]: Started cri-containerd-47c562f6e94c65113e1704667b49fb0be4903d0bf0feaeb9675c5e00b0741a6a.scope - libcontainer container 47c562f6e94c65113e1704667b49fb0be4903d0bf0feaeb9675c5e00b0741a6a. 
Mar 6 01:50:23.229113 containerd[1469]: time="2026-03-06T01:50:23.228874813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 6 01:50:23.277510 containerd[1469]: time="2026-03-06T01:50:23.277339818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4w7n,Uid:3bae5647-13e2-41e8-bb16-7250bd2f6297,Namespace:calico-system,Attempt:0,} returns sandbox id \"47c562f6e94c65113e1704667b49fb0be4903d0bf0feaeb9675c5e00b0741a6a\"" Mar 6 01:50:24.814853 containerd[1469]: time="2026-03-06T01:50:24.814740395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:24.815936 containerd[1469]: time="2026-03-06T01:50:24.815851429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 6 01:50:24.818128 containerd[1469]: time="2026-03-06T01:50:24.818030975Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:24.822355 containerd[1469]: time="2026-03-06T01:50:24.822274373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:24.823390 containerd[1469]: time="2026-03-06T01:50:24.823315324Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.594327982s" Mar 6 01:50:24.823390 containerd[1469]: time="2026-03-06T01:50:24.823365368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" 
returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 6 01:50:24.824802 containerd[1469]: time="2026-03-06T01:50:24.824766893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 6 01:50:24.842367 containerd[1469]: time="2026-03-06T01:50:24.842295552Z" level=info msg="CreateContainer within sandbox \"93895b8712b0327bbb1b6161d6d587d580c7b8b289250f8819599636d6532737\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 6 01:50:24.862575 containerd[1469]: time="2026-03-06T01:50:24.862467737Z" level=info msg="CreateContainer within sandbox \"93895b8712b0327bbb1b6161d6d587d580c7b8b289250f8819599636d6532737\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1cb43b0c3fca55d651a59d852e00933ecd7d1cd7bebbdc6648f9058a479afbf6\"" Mar 6 01:50:24.863450 containerd[1469]: time="2026-03-06T01:50:24.863341017Z" level=info msg="StartContainer for \"1cb43b0c3fca55d651a59d852e00933ecd7d1cd7bebbdc6648f9058a479afbf6\"" Mar 6 01:50:24.939517 systemd[1]: Started cri-containerd-1cb43b0c3fca55d651a59d852e00933ecd7d1cd7bebbdc6648f9058a479afbf6.scope - libcontainer container 1cb43b0c3fca55d651a59d852e00933ecd7d1cd7bebbdc6648f9058a479afbf6. 
Mar 6 01:50:25.021124 containerd[1469]: time="2026-03-06T01:50:25.020733036Z" level=info msg="StartContainer for \"1cb43b0c3fca55d651a59d852e00933ecd7d1cd7bebbdc6648f9058a479afbf6\" returns successfully" Mar 6 01:50:25.132747 kubelet[2576]: E0306 01:50:25.130591 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xv9jc" podUID="20ed6bd5-f019-43c9-934e-78717b2dba0c" Mar 6 01:50:25.503430 containerd[1469]: time="2026-03-06T01:50:25.503299717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:25.504857 containerd[1469]: time="2026-03-06T01:50:25.504695509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 6 01:50:25.506472 containerd[1469]: time="2026-03-06T01:50:25.506297031Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:25.509682 containerd[1469]: time="2026-03-06T01:50:25.509612519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:25.511026 containerd[1469]: time="2026-03-06T01:50:25.510961905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 686.158263ms" Mar 6 01:50:25.511091 containerd[1469]: time="2026-03-06T01:50:25.511036133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 6 01:50:25.517566 containerd[1469]: time="2026-03-06T01:50:25.517451038Z" level=info msg="CreateContainer within sandbox \"47c562f6e94c65113e1704667b49fb0be4903d0bf0feaeb9675c5e00b0741a6a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 6 01:50:25.540824 containerd[1469]: time="2026-03-06T01:50:25.540683259Z" level=info msg="CreateContainer within sandbox \"47c562f6e94c65113e1704667b49fb0be4903d0bf0feaeb9675c5e00b0741a6a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c3bd491a1eaf4b2c59d0bc5b4eb8910ae5e4481c4e4fb0dba0b06a735185b037\"" Mar 6 01:50:25.542344 containerd[1469]: time="2026-03-06T01:50:25.541563651Z" level=info msg="StartContainer for \"c3bd491a1eaf4b2c59d0bc5b4eb8910ae5e4481c4e4fb0dba0b06a735185b037\"" Mar 6 01:50:25.621510 systemd[1]: Started cri-containerd-c3bd491a1eaf4b2c59d0bc5b4eb8910ae5e4481c4e4fb0dba0b06a735185b037.scope - libcontainer container c3bd491a1eaf4b2c59d0bc5b4eb8910ae5e4481c4e4fb0dba0b06a735185b037. Mar 6 01:50:25.682059 containerd[1469]: time="2026-03-06T01:50:25.681934330Z" level=info msg="StartContainer for \"c3bd491a1eaf4b2c59d0bc5b4eb8910ae5e4481c4e4fb0dba0b06a735185b037\" returns successfully" Mar 6 01:50:25.710910 systemd[1]: cri-containerd-c3bd491a1eaf4b2c59d0bc5b4eb8910ae5e4481c4e4fb0dba0b06a735185b037.scope: Deactivated successfully. 
Mar 6 01:50:25.717701 kubelet[2576]: E0306 01:50:25.717662 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:25.854756 containerd[1469]: time="2026-03-06T01:50:25.852641684Z" level=info msg="shim disconnected" id=c3bd491a1eaf4b2c59d0bc5b4eb8910ae5e4481c4e4fb0dba0b06a735185b037 namespace=k8s.io Mar 6 01:50:25.854756 containerd[1469]: time="2026-03-06T01:50:25.852748523Z" level=warning msg="cleaning up after shim disconnected" id=c3bd491a1eaf4b2c59d0bc5b4eb8910ae5e4481c4e4fb0dba0b06a735185b037 namespace=k8s.io Mar 6 01:50:25.854756 containerd[1469]: time="2026-03-06T01:50:25.852761167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:50:25.893799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3bd491a1eaf4b2c59d0bc5b4eb8910ae5e4481c4e4fb0dba0b06a735185b037-rootfs.mount: Deactivated successfully. Mar 6 01:50:26.722345 kubelet[2576]: E0306 01:50:26.722191 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:26.724346 containerd[1469]: time="2026-03-06T01:50:26.724189096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 6 01:50:26.745899 kubelet[2576]: I0306 01:50:26.745789 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b999f649d-hf7kq" podStartSLOduration=3.147931945 podStartE2EDuration="4.745775447s" podCreationTimestamp="2026-03-06 01:50:22 +0000 UTC" firstStartedPulling="2026-03-06 01:50:23.22676205 +0000 UTC m=+19.464330947" lastFinishedPulling="2026-03-06 01:50:24.824605552 +0000 UTC m=+21.062174449" observedRunningTime="2026-03-06 01:50:25.770637453 +0000 UTC m=+22.008206410" watchObservedRunningTime="2026-03-06 01:50:26.745775447 +0000 UTC m=+22.983344344" Mar 6 01:50:27.132575 
kubelet[2576]: E0306 01:50:27.131306 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xv9jc" podUID="20ed6bd5-f019-43c9-934e-78717b2dba0c" Mar 6 01:50:27.724886 kubelet[2576]: E0306 01:50:27.724773 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:29.130830 kubelet[2576]: E0306 01:50:29.130749 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xv9jc" podUID="20ed6bd5-f019-43c9-934e-78717b2dba0c" Mar 6 01:50:31.131429 kubelet[2576]: E0306 01:50:31.131163 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xv9jc" podUID="20ed6bd5-f019-43c9-934e-78717b2dba0c" Mar 6 01:50:33.131303 kubelet[2576]: E0306 01:50:33.130890 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xv9jc" podUID="20ed6bd5-f019-43c9-934e-78717b2dba0c" Mar 6 01:50:33.660983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798563964.mount: Deactivated successfully. 
Mar 6 01:50:34.031752 containerd[1469]: time="2026-03-06T01:50:34.031317175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 6 01:50:34.038694 containerd[1469]: time="2026-03-06T01:50:34.038638628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.31431979s" Mar 6 01:50:34.038694 containerd[1469]: time="2026-03-06T01:50:34.038692078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 6 01:50:34.046798 containerd[1469]: time="2026-03-06T01:50:34.046690031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:34.046951 containerd[1469]: time="2026-03-06T01:50:34.046822362Z" level=info msg="CreateContainer within sandbox \"47c562f6e94c65113e1704667b49fb0be4903d0bf0feaeb9675c5e00b0741a6a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 6 01:50:34.047990 containerd[1469]: time="2026-03-06T01:50:34.047854854Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:34.048727 containerd[1469]: time="2026-03-06T01:50:34.048673311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:34.086275 containerd[1469]: time="2026-03-06T01:50:34.086109807Z" level=info msg="CreateContainer 
within sandbox \"47c562f6e94c65113e1704667b49fb0be4903d0bf0feaeb9675c5e00b0741a6a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"3c4e9abe538b6c834fdc69445c504896b119ed411442d8b9015bffecbaf72385\"" Mar 6 01:50:34.086941 containerd[1469]: time="2026-03-06T01:50:34.086896806Z" level=info msg="StartContainer for \"3c4e9abe538b6c834fdc69445c504896b119ed411442d8b9015bffecbaf72385\"" Mar 6 01:50:34.191877 systemd[1]: Started cri-containerd-3c4e9abe538b6c834fdc69445c504896b119ed411442d8b9015bffecbaf72385.scope - libcontainer container 3c4e9abe538b6c834fdc69445c504896b119ed411442d8b9015bffecbaf72385. Mar 6 01:50:34.402203 containerd[1469]: time="2026-03-06T01:50:34.395981994Z" level=info msg="StartContainer for \"3c4e9abe538b6c834fdc69445c504896b119ed411442d8b9015bffecbaf72385\" returns successfully" Mar 6 01:50:34.421915 systemd[1]: cri-containerd-3c4e9abe538b6c834fdc69445c504896b119ed411442d8b9015bffecbaf72385.scope: Deactivated successfully. Mar 6 01:50:34.487362 containerd[1469]: time="2026-03-06T01:50:34.487179253Z" level=info msg="shim disconnected" id=3c4e9abe538b6c834fdc69445c504896b119ed411442d8b9015bffecbaf72385 namespace=k8s.io Mar 6 01:50:34.487362 containerd[1469]: time="2026-03-06T01:50:34.487348249Z" level=warning msg="cleaning up after shim disconnected" id=3c4e9abe538b6c834fdc69445c504896b119ed411442d8b9015bffecbaf72385 namespace=k8s.io Mar 6 01:50:34.487362 containerd[1469]: time="2026-03-06T01:50:34.487367144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:50:34.662336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c4e9abe538b6c834fdc69445c504896b119ed411442d8b9015bffecbaf72385-rootfs.mount: Deactivated successfully. 
Mar 6 01:50:34.757342 containerd[1469]: time="2026-03-06T01:50:34.757020377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 6 01:50:35.131025 kubelet[2576]: E0306 01:50:35.130937 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xv9jc" podUID="20ed6bd5-f019-43c9-934e-78717b2dba0c" Mar 6 01:50:37.130671 kubelet[2576]: E0306 01:50:37.130445 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xv9jc" podUID="20ed6bd5-f019-43c9-934e-78717b2dba0c" Mar 6 01:50:39.133189 kubelet[2576]: E0306 01:50:39.132883 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xv9jc" podUID="20ed6bd5-f019-43c9-934e-78717b2dba0c" Mar 6 01:50:39.398089 containerd[1469]: time="2026-03-06T01:50:39.397661110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:39.401078 containerd[1469]: time="2026-03-06T01:50:39.399623904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 6 01:50:39.401365 containerd[1469]: time="2026-03-06T01:50:39.401315582Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:39.404080 containerd[1469]: 
time="2026-03-06T01:50:39.404031033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:39.405397 containerd[1469]: time="2026-03-06T01:50:39.405351447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.648285002s" Mar 6 01:50:39.405397 containerd[1469]: time="2026-03-06T01:50:39.405397453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 6 01:50:39.412201 containerd[1469]: time="2026-03-06T01:50:39.412150178Z" level=info msg="CreateContainer within sandbox \"47c562f6e94c65113e1704667b49fb0be4903d0bf0feaeb9675c5e00b0741a6a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 01:50:39.438267 containerd[1469]: time="2026-03-06T01:50:39.438148289Z" level=info msg="CreateContainer within sandbox \"47c562f6e94c65113e1704667b49fb0be4903d0bf0feaeb9675c5e00b0741a6a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5d5720ab426d2631388d4ef0237f2ce118ee5ce183c6be9a834b347e81ee91f5\"" Mar 6 01:50:39.438952 containerd[1469]: time="2026-03-06T01:50:39.438930408Z" level=info msg="StartContainer for \"5d5720ab426d2631388d4ef0237f2ce118ee5ce183c6be9a834b347e81ee91f5\"" Mar 6 01:50:39.524632 systemd[1]: Started cri-containerd-5d5720ab426d2631388d4ef0237f2ce118ee5ce183c6be9a834b347e81ee91f5.scope - libcontainer container 5d5720ab426d2631388d4ef0237f2ce118ee5ce183c6be9a834b347e81ee91f5. 
Mar 6 01:50:39.566275 containerd[1469]: time="2026-03-06T01:50:39.566115383Z" level=info msg="StartContainer for \"5d5720ab426d2631388d4ef0237f2ce118ee5ce183c6be9a834b347e81ee91f5\" returns successfully" Mar 6 01:50:40.475684 systemd[1]: cri-containerd-5d5720ab426d2631388d4ef0237f2ce118ee5ce183c6be9a834b347e81ee91f5.scope: Deactivated successfully. Mar 6 01:50:40.476698 systemd[1]: cri-containerd-5d5720ab426d2631388d4ef0237f2ce118ee5ce183c6be9a834b347e81ee91f5.scope: Consumed 1.071s CPU time. Mar 6 01:50:40.518899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d5720ab426d2631388d4ef0237f2ce118ee5ce183c6be9a834b347e81ee91f5-rootfs.mount: Deactivated successfully. Mar 6 01:50:40.554349 kubelet[2576]: I0306 01:50:40.554072 2576 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 6 01:50:40.595405 containerd[1469]: time="2026-03-06T01:50:40.595309928Z" level=info msg="shim disconnected" id=5d5720ab426d2631388d4ef0237f2ce118ee5ce183c6be9a834b347e81ee91f5 namespace=k8s.io Mar 6 01:50:40.595953 containerd[1469]: time="2026-03-06T01:50:40.595409534Z" level=warning msg="cleaning up after shim disconnected" id=5d5720ab426d2631388d4ef0237f2ce118ee5ce183c6be9a834b347e81ee91f5 namespace=k8s.io Mar 6 01:50:40.595953 containerd[1469]: time="2026-03-06T01:50:40.595425934Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:50:40.644398 kubelet[2576]: I0306 01:50:40.644170 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36c75156-d099-419f-a2ec-938f6d71a9bf-calico-apiserver-certs\") pod \"calico-apiserver-8459ffd5ff-d94vx\" (UID: \"36c75156-d099-419f-a2ec-938f6d71a9bf\") " pod="calico-system/calico-apiserver-8459ffd5ff-d94vx" Mar 6 01:50:40.644398 kubelet[2576]: I0306 01:50:40.644381 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb-config-volume\") pod \"coredns-66bc5c9577-pl2df\" (UID: \"fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb\") " pod="kube-system/coredns-66bc5c9577-pl2df" Mar 6 01:50:40.644606 kubelet[2576]: I0306 01:50:40.644417 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sllc\" (UniqueName: \"kubernetes.io/projected/36c75156-d099-419f-a2ec-938f6d71a9bf-kube-api-access-4sllc\") pod \"calico-apiserver-8459ffd5ff-d94vx\" (UID: \"36c75156-d099-419f-a2ec-938f6d71a9bf\") " pod="calico-system/calico-apiserver-8459ffd5ff-d94vx" Mar 6 01:50:40.644606 kubelet[2576]: I0306 01:50:40.644444 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqtxl\" (UniqueName: \"kubernetes.io/projected/fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb-kube-api-access-fqtxl\") pod \"coredns-66bc5c9577-pl2df\" (UID: \"fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb\") " pod="kube-system/coredns-66bc5c9577-pl2df" Mar 6 01:50:40.648795 systemd[1]: Created slice kubepods-besteffort-pod36c75156_d099_419f_a2ec_938f6d71a9bf.slice - libcontainer container kubepods-besteffort-pod36c75156_d099_419f_a2ec_938f6d71a9bf.slice. Mar 6 01:50:40.664084 systemd[1]: Created slice kubepods-burstable-podfc0d15f8_a18a_41b3_a55e_0af73e5a2dcb.slice - libcontainer container kubepods-burstable-podfc0d15f8_a18a_41b3_a55e_0af73e5a2dcb.slice. Mar 6 01:50:40.692747 systemd[1]: Created slice kubepods-besteffort-pod6664a973_0768_4772_9a16_4ab55bf393fa.slice - libcontainer container kubepods-besteffort-pod6664a973_0768_4772_9a16_4ab55bf393fa.slice. Mar 6 01:50:40.702816 systemd[1]: Created slice kubepods-besteffort-pod1851d6a8_7f92_4eab_9dc3_e0fb2b763487.slice - libcontainer container kubepods-besteffort-pod1851d6a8_7f92_4eab_9dc3_e0fb2b763487.slice. 
Mar 6 01:50:40.712592 systemd[1]: Created slice kubepods-burstable-pod8ed54e90_4fd2_4aae_b446_8c5b8cd922cd.slice - libcontainer container kubepods-burstable-pod8ed54e90_4fd2_4aae_b446_8c5b8cd922cd.slice. Mar 6 01:50:40.724188 systemd[1]: Created slice kubepods-besteffort-podb21f2ee9_d154_41f0_994a_34e4bb6425e8.slice - libcontainer container kubepods-besteffort-podb21f2ee9_d154_41f0_994a_34e4bb6425e8.slice. Mar 6 01:50:40.739978 systemd[1]: Created slice kubepods-besteffort-pod5b6282a8_6c40_45a0_9b20_38cca779da41.slice - libcontainer container kubepods-besteffort-pod5b6282a8_6c40_45a0_9b20_38cca779da41.slice. Mar 6 01:50:40.815060 containerd[1469]: time="2026-03-06T01:50:40.814850362Z" level=info msg="CreateContainer within sandbox \"47c562f6e94c65113e1704667b49fb0be4903d0bf0feaeb9675c5e00b0741a6a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 6 01:50:40.847457 kubelet[2576]: I0306 01:50:40.846802 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b6282a8-6c40-45a0-9b20-38cca779da41-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-km8qm\" (UID: \"5b6282a8-6c40-45a0-9b20-38cca779da41\") " pod="calico-system/goldmane-cccfbd5cf-km8qm" Mar 6 01:50:40.847457 kubelet[2576]: I0306 01:50:40.846850 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b21f2ee9-d154-41f0-994a-34e4bb6425e8-nginx-config\") pod \"whisker-595756dd6-hffk5\" (UID: \"b21f2ee9-d154-41f0-994a-34e4bb6425e8\") " pod="calico-system/whisker-595756dd6-hffk5" Mar 6 01:50:40.847457 kubelet[2576]: I0306 01:50:40.846875 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6282a8-6c40-45a0-9b20-38cca779da41-config\") pod \"goldmane-cccfbd5cf-km8qm\" (UID: 
\"5b6282a8-6c40-45a0-9b20-38cca779da41\") " pod="calico-system/goldmane-cccfbd5cf-km8qm" Mar 6 01:50:40.847457 kubelet[2576]: I0306 01:50:40.846901 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5b6282a8-6c40-45a0-9b20-38cca779da41-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-km8qm\" (UID: \"5b6282a8-6c40-45a0-9b20-38cca779da41\") " pod="calico-system/goldmane-cccfbd5cf-km8qm" Mar 6 01:50:40.847457 kubelet[2576]: I0306 01:50:40.846929 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpx7v\" (UniqueName: \"kubernetes.io/projected/5b6282a8-6c40-45a0-9b20-38cca779da41-kube-api-access-dpx7v\") pod \"goldmane-cccfbd5cf-km8qm\" (UID: \"5b6282a8-6c40-45a0-9b20-38cca779da41\") " pod="calico-system/goldmane-cccfbd5cf-km8qm" Mar 6 01:50:40.847833 kubelet[2576]: I0306 01:50:40.846959 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1851d6a8-7f92-4eab-9dc3-e0fb2b763487-tigera-ca-bundle\") pod \"calico-kube-controllers-6569c6d5b5-lvltr\" (UID: \"1851d6a8-7f92-4eab-9dc3-e0fb2b763487\") " pod="calico-system/calico-kube-controllers-6569c6d5b5-lvltr" Mar 6 01:50:40.847833 kubelet[2576]: I0306 01:50:40.846991 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ed54e90-4fd2-4aae-b446-8c5b8cd922cd-config-volume\") pod \"coredns-66bc5c9577-k659t\" (UID: \"8ed54e90-4fd2-4aae-b446-8c5b8cd922cd\") " pod="kube-system/coredns-66bc5c9577-k659t" Mar 6 01:50:40.847833 kubelet[2576]: I0306 01:50:40.847018 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzcbd\" (UniqueName: 
\"kubernetes.io/projected/8ed54e90-4fd2-4aae-b446-8c5b8cd922cd-kube-api-access-hzcbd\") pod \"coredns-66bc5c9577-k659t\" (UID: \"8ed54e90-4fd2-4aae-b446-8c5b8cd922cd\") " pod="kube-system/coredns-66bc5c9577-k659t" Mar 6 01:50:40.847833 kubelet[2576]: I0306 01:50:40.847042 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b21f2ee9-d154-41f0-994a-34e4bb6425e8-whisker-backend-key-pair\") pod \"whisker-595756dd6-hffk5\" (UID: \"b21f2ee9-d154-41f0-994a-34e4bb6425e8\") " pod="calico-system/whisker-595756dd6-hffk5" Mar 6 01:50:40.847833 kubelet[2576]: I0306 01:50:40.847069 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8plj5\" (UniqueName: \"kubernetes.io/projected/b21f2ee9-d154-41f0-994a-34e4bb6425e8-kube-api-access-8plj5\") pod \"whisker-595756dd6-hffk5\" (UID: \"b21f2ee9-d154-41f0-994a-34e4bb6425e8\") " pod="calico-system/whisker-595756dd6-hffk5" Mar 6 01:50:40.848818 kubelet[2576]: I0306 01:50:40.847108 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqg8p\" (UniqueName: \"kubernetes.io/projected/6664a973-0768-4772-9a16-4ab55bf393fa-kube-api-access-kqg8p\") pod \"calico-apiserver-8459ffd5ff-w4b5h\" (UID: \"6664a973-0768-4772-9a16-4ab55bf393fa\") " pod="calico-system/calico-apiserver-8459ffd5ff-w4b5h" Mar 6 01:50:40.848818 kubelet[2576]: I0306 01:50:40.847130 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wblgg\" (UniqueName: \"kubernetes.io/projected/1851d6a8-7f92-4eab-9dc3-e0fb2b763487-kube-api-access-wblgg\") pod \"calico-kube-controllers-6569c6d5b5-lvltr\" (UID: \"1851d6a8-7f92-4eab-9dc3-e0fb2b763487\") " pod="calico-system/calico-kube-controllers-6569c6d5b5-lvltr" Mar 6 01:50:40.848818 kubelet[2576]: I0306 01:50:40.847170 2576 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b21f2ee9-d154-41f0-994a-34e4bb6425e8-whisker-ca-bundle\") pod \"whisker-595756dd6-hffk5\" (UID: \"b21f2ee9-d154-41f0-994a-34e4bb6425e8\") " pod="calico-system/whisker-595756dd6-hffk5" Mar 6 01:50:40.848818 kubelet[2576]: I0306 01:50:40.847329 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6664a973-0768-4772-9a16-4ab55bf393fa-calico-apiserver-certs\") pod \"calico-apiserver-8459ffd5ff-w4b5h\" (UID: \"6664a973-0768-4772-9a16-4ab55bf393fa\") " pod="calico-system/calico-apiserver-8459ffd5ff-w4b5h" Mar 6 01:50:40.850867 containerd[1469]: time="2026-03-06T01:50:40.850821159Z" level=info msg="CreateContainer within sandbox \"47c562f6e94c65113e1704667b49fb0be4903d0bf0feaeb9675c5e00b0741a6a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"80f118ece9ede264f87cbf93591f0002d68143b38f8eb544bb9c88f6c2fbd264\"" Mar 6 01:50:40.852644 containerd[1469]: time="2026-03-06T01:50:40.852373186Z" level=info msg="StartContainer for \"80f118ece9ede264f87cbf93591f0002d68143b38f8eb544bb9c88f6c2fbd264\"" Mar 6 01:50:40.905728 systemd[1]: Started cri-containerd-80f118ece9ede264f87cbf93591f0002d68143b38f8eb544bb9c88f6c2fbd264.scope - libcontainer container 80f118ece9ede264f87cbf93591f0002d68143b38f8eb544bb9c88f6c2fbd264. 
Mar 6 01:50:40.962906 containerd[1469]: time="2026-03-06T01:50:40.962801278Z" level=info msg="StartContainer for \"80f118ece9ede264f87cbf93591f0002d68143b38f8eb544bb9c88f6c2fbd264\" returns successfully" Mar 6 01:50:40.977201 containerd[1469]: time="2026-03-06T01:50:40.970701003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8459ffd5ff-d94vx,Uid:36c75156-d099-419f-a2ec-938f6d71a9bf,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:40.996620 kubelet[2576]: E0306 01:50:40.996362 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:41.001374 containerd[1469]: time="2026-03-06T01:50:41.001331206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pl2df,Uid:fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb,Namespace:kube-system,Attempt:0,}" Mar 6 01:50:41.011008 containerd[1469]: time="2026-03-06T01:50:41.010911293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6569c6d5b5-lvltr,Uid:1851d6a8-7f92-4eab-9dc3-e0fb2b763487,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:41.021089 kubelet[2576]: E0306 01:50:41.020817 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:41.022060 containerd[1469]: time="2026-03-06T01:50:41.021997449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k659t,Uid:8ed54e90-4fd2-4aae-b446-8c5b8cd922cd,Namespace:kube-system,Attempt:0,}" Mar 6 01:50:41.036851 containerd[1469]: time="2026-03-06T01:50:41.036722241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595756dd6-hffk5,Uid:b21f2ee9-d154-41f0-994a-34e4bb6425e8,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:41.052971 containerd[1469]: time="2026-03-06T01:50:41.052828695Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-km8qm,Uid:5b6282a8-6c40-45a0-9b20-38cca779da41,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:41.143661 systemd[1]: Created slice kubepods-besteffort-pod20ed6bd5_f019_43c9_934e_78717b2dba0c.slice - libcontainer container kubepods-besteffort-pod20ed6bd5_f019_43c9_934e_78717b2dba0c.slice. Mar 6 01:50:41.153323 containerd[1469]: time="2026-03-06T01:50:41.152774222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xv9jc,Uid:20ed6bd5-f019-43c9-934e-78717b2dba0c,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:41.303859 containerd[1469]: time="2026-03-06T01:50:41.303719152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8459ffd5ff-w4b5h,Uid:6664a973-0768-4772-9a16-4ab55bf393fa,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:41.379169 containerd[1469]: time="2026-03-06T01:50:41.378921161Z" level=error msg="Failed to destroy network for sandbox \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.380804 containerd[1469]: time="2026-03-06T01:50:41.380715149Z" level=error msg="encountered an error cleaning up failed sandbox \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.380863 containerd[1469]: time="2026-03-06T01:50:41.380804185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8459ffd5ff-d94vx,Uid:36c75156-d099-419f-a2ec-938f6d71a9bf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.390617 kubelet[2576]: E0306 01:50:41.390444 2576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.390617 kubelet[2576]: E0306 01:50:41.390583 2576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8459ffd5ff-d94vx" Mar 6 01:50:41.390617 kubelet[2576]: E0306 01:50:41.390611 2576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8459ffd5ff-d94vx" Mar 6 01:50:41.391308 kubelet[2576]: E0306 01:50:41.390656 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8459ffd5ff-d94vx_calico-system(36c75156-d099-419f-a2ec-938f6d71a9bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-8459ffd5ff-d94vx_calico-system(36c75156-d099-419f-a2ec-938f6d71a9bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8459ffd5ff-d94vx" podUID="36c75156-d099-419f-a2ec-938f6d71a9bf" Mar 6 01:50:41.427785 containerd[1469]: time="2026-03-06T01:50:41.425537383Z" level=error msg="Failed to destroy network for sandbox \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.429662 containerd[1469]: time="2026-03-06T01:50:41.429467438Z" level=error msg="encountered an error cleaning up failed sandbox \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.430059 containerd[1469]: time="2026-03-06T01:50:41.429810088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k659t,Uid:8ed54e90-4fd2-4aae-b446-8c5b8cd922cd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.431097 kubelet[2576]: E0306 01:50:41.431058 2576 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.431318 kubelet[2576]: E0306 01:50:41.431193 2576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k659t" Mar 6 01:50:41.431892 kubelet[2576]: E0306 01:50:41.431409 2576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k659t" Mar 6 01:50:41.431892 kubelet[2576]: E0306 01:50:41.431477 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-k659t_kube-system(8ed54e90-4fd2-4aae-b446-8c5b8cd922cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-k659t_kube-system(8ed54e90-4fd2-4aae-b446-8c5b8cd922cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-k659t" podUID="8ed54e90-4fd2-4aae-b446-8c5b8cd922cd" Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.590 [INFO][3606] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.590 [INFO][3606] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" iface="eth0" netns="/var/run/netns/cni-88a85819-53ee-2213-15f7-07fbeef3a4d9" Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.590 [INFO][3606] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" iface="eth0" netns="/var/run/netns/cni-88a85819-53ee-2213-15f7-07fbeef3a4d9" Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.592 [INFO][3606] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" iface="eth0" netns="/var/run/netns/cni-88a85819-53ee-2213-15f7-07fbeef3a4d9" Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.593 [INFO][3606] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.593 [INFO][3606] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.704 [INFO][3692] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" HandleID="k8s-pod-network.5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" Workload="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.704 [INFO][3692] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.712 [INFO][3692] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.728 [WARNING][3692] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" HandleID="k8s-pod-network.5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" Workload="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.728 [INFO][3692] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" HandleID="k8s-pod-network.5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" Workload="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.736 [INFO][3692] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:41.751780 containerd[1469]: 2026-03-06 01:50:41.742 [INFO][3606] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076" Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.573 [INFO][3631] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.574 [INFO][3631] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" iface="eth0" netns="/var/run/netns/cni-bb89b744-9eb8-74af-a3fa-d82e557b9334" Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.574 [INFO][3631] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" iface="eth0" netns="/var/run/netns/cni-bb89b744-9eb8-74af-a3fa-d82e557b9334" Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.575 [INFO][3631] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" iface="eth0" netns="/var/run/netns/cni-bb89b744-9eb8-74af-a3fa-d82e557b9334" Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.575 [INFO][3631] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.575 [INFO][3631] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.682 [INFO][3685] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" HandleID="k8s-pod-network.0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" Workload="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.683 [INFO][3685] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.684 [INFO][3685] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.707 [WARNING][3685] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" HandleID="k8s-pod-network.0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" Workload="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.708 [INFO][3685] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" HandleID="k8s-pod-network.0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" Workload="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.712 [INFO][3685] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:41.755372 containerd[1469]: 2026-03-06 01:50:41.734 [INFO][3631] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421" Mar 6 01:50:41.757158 systemd[1]: run-netns-cni\x2d88a85819\x2d53ee\x2d2213\x2d15f7\x2d07fbeef3a4d9.mount: Deactivated successfully. Mar 6 01:50:41.765847 systemd[1]: run-netns-cni\x2dbb89b744\x2d9eb8\x2d74af\x2da3fa\x2dd82e557b9334.mount: Deactivated successfully. Mar 6 01:50:41.765996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421-shm.mount: Deactivated successfully. 
Mar 6 01:50:41.771955 containerd[1469]: time="2026-03-06T01:50:41.771669170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-km8qm,Uid:5b6282a8-6c40-45a0-9b20-38cca779da41,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.772058 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076-shm.mount: Deactivated successfully. Mar 6 01:50:41.772819 kubelet[2576]: E0306 01:50:41.772599 2576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.772819 kubelet[2576]: E0306 01:50:41.772710 2576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-km8qm" Mar 6 01:50:41.772819 kubelet[2576]: E0306 01:50:41.772739 2576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-km8qm" Mar 6 01:50:41.774341 kubelet[2576]: E0306 01:50:41.772788 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-km8qm_calico-system(5b6282a8-6c40-45a0-9b20-38cca779da41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-km8qm_calico-system(5b6282a8-6c40-45a0-9b20-38cca779da41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fed5727c696602a07d6e95fca9dfd8e60548af9e94697e3141818291e471421\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-km8qm" podUID="5b6282a8-6c40-45a0-9b20-38cca779da41" Mar 6 01:50:41.779929 containerd[1469]: time="2026-03-06T01:50:41.779859258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6569c6d5b5-lvltr,Uid:1851d6a8-7f92-4eab-9dc3-e0fb2b763487,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.780114 kubelet[2576]: E0306 01:50:41.780034 2576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 
01:50:41.780114 kubelet[2576]: E0306 01:50:41.780070 2576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6569c6d5b5-lvltr" Mar 6 01:50:41.780114 kubelet[2576]: E0306 01:50:41.780087 2576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6569c6d5b5-lvltr" Mar 6 01:50:41.780376 kubelet[2576]: E0306 01:50:41.780124 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6569c6d5b5-lvltr_calico-system(1851d6a8-7f92-4eab-9dc3-e0fb2b763487)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6569c6d5b5-lvltr_calico-system(1851d6a8-7f92-4eab-9dc3-e0fb2b763487)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5426ebf1f65129ff817a269c72855d74a77a135085d1ac2b8c61d2ed962fd076\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6569c6d5b5-lvltr" podUID="1851d6a8-7f92-4eab-9dc3-e0fb2b763487" Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.548 [INFO][3557] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.549 [INFO][3557] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" iface="eth0" netns="/var/run/netns/cni-bfe2f2b3-81fa-a9d9-4170-ddab3676f4cd" Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.549 [INFO][3557] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" iface="eth0" netns="/var/run/netns/cni-bfe2f2b3-81fa-a9d9-4170-ddab3676f4cd" Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.550 [INFO][3557] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" iface="eth0" netns="/var/run/netns/cni-bfe2f2b3-81fa-a9d9-4170-ddab3676f4cd" Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.550 [INFO][3557] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.550 [INFO][3557] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.707 [INFO][3673] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" HandleID="k8s-pod-network.d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" Workload="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.707 [INFO][3673] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.738 [INFO][3673] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.757 [WARNING][3673] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" HandleID="k8s-pod-network.d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" Workload="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.761 [INFO][3673] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" HandleID="k8s-pod-network.d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" Workload="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.770 [INFO][3673] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:41.790350 containerd[1469]: 2026-03-06 01:50:41.784 [INFO][3557] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d" Mar 6 01:50:41.796794 systemd[1]: run-netns-cni\x2dbfe2f2b3\x2d81fa\x2da9d9\x2d4170\x2dddab3676f4cd.mount: Deactivated successfully. 
Mar 6 01:50:41.800125 containerd[1469]: time="2026-03-06T01:50:41.799983371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pl2df,Uid:fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.800198 kubelet[2576]: I0306 01:50:41.800144 2576 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:50:41.800645 kubelet[2576]: E0306 01:50:41.800581 2576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:41.800701 kubelet[2576]: E0306 01:50:41.800672 2576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pl2df" Mar 6 01:50:41.800727 kubelet[2576]: E0306 01:50:41.800698 2576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-pl2df" Mar 6 01:50:41.801005 kubelet[2576]: E0306 01:50:41.800961 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-pl2df_kube-system(fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-pl2df_kube-system(fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-pl2df" podUID="fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb" Mar 6 01:50:41.805755 kubelet[2576]: I0306 01:50:41.805074 2576 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:50:41.811685 containerd[1469]: time="2026-03-06T01:50:41.811098969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6569c6d5b5-lvltr,Uid:1851d6a8-7f92-4eab-9dc3-e0fb2b763487,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:41.816447 containerd[1469]: time="2026-03-06T01:50:41.816092232Z" level=info msg="StopPodSandbox for \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\"" Mar 6 01:50:41.817362 containerd[1469]: time="2026-03-06T01:50:41.816949900Z" level=info msg="StopPodSandbox for \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\"" Mar 6 01:50:41.818201 containerd[1469]: time="2026-03-06T01:50:41.817929902Z" level=info msg="Ensure that sandbox 818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b in 
task-service has been cleanup successfully" Mar 6 01:50:41.818201 containerd[1469]: time="2026-03-06T01:50:41.817949195Z" level=info msg="Ensure that sandbox b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d in task-service has been cleanup successfully" Mar 6 01:50:41.819781 containerd[1469]: time="2026-03-06T01:50:41.819198117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-km8qm,Uid:5b6282a8-6c40-45a0-9b20-38cca779da41,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:41.826376 kubelet[2576]: I0306 01:50:41.826156 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z4w7n" podStartSLOduration=3.6987752560000002 podStartE2EDuration="19.826136395s" podCreationTimestamp="2026-03-06 01:50:22 +0000 UTC" firstStartedPulling="2026-03-06 01:50:23.279002058 +0000 UTC m=+19.516570954" lastFinishedPulling="2026-03-06 01:50:39.406363186 +0000 UTC m=+35.643932093" observedRunningTime="2026-03-06 01:50:41.82219186 +0000 UTC m=+38.059760758" watchObservedRunningTime="2026-03-06 01:50:41.826136395 +0000 UTC m=+38.063705292" Mar 6 01:50:42.528724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5203771c7b8e911dd0e3ff92d5a24e626188d906162bd75f7e66baba1840d0d-shm.mount: Deactivated successfully. 
Mar 6 01:50:42.810745 kubelet[2576]: I0306 01:50:42.809704 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:50:42.813168 kubelet[2576]: E0306 01:50:42.813058 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:42.813748 containerd[1469]: time="2026-03-06T01:50:42.813658436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pl2df,Uid:fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb,Namespace:kube-system,Attempt:0,}" Mar 6 01:50:42.970855 systemd-networkd[1405]: calibe659d715cd: Link UP Mar 6 01:50:42.974651 systemd-networkd[1405]: calibe659d715cd: Gained carrier Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:41.628 [INFO][3637] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:41.629 [INFO][3637] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" iface="eth0" netns="/var/run/netns/cni-ed3c1887-203a-fdfe-e7de-8f850c8e362a" Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:41.629 [INFO][3637] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" iface="eth0" netns="/var/run/netns/cni-ed3c1887-203a-fdfe-e7de-8f850c8e362a" Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:41.639 [INFO][3637] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" iface="eth0" netns="/var/run/netns/cni-ed3c1887-203a-fdfe-e7de-8f850c8e362a" Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:41.639 [INFO][3637] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:41.639 [INFO][3637] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:41.727 [INFO][3709] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" HandleID="k8s-pod-network.fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" Workload="localhost-k8s-whisker--595756dd6--hffk5-eth0" Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:41.728 [INFO][3709] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:42.929 [INFO][3709] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:42.941 [WARNING][3709] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" HandleID="k8s-pod-network.fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" Workload="localhost-k8s-whisker--595756dd6--hffk5-eth0" Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:42.941 [INFO][3709] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" HandleID="k8s-pod-network.fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" Workload="localhost-k8s-whisker--595756dd6--hffk5-eth0" Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:42.949 [INFO][3709] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:42.987592 containerd[1469]: 2026-03-06 01:50:42.958 [INFO][3637] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a" Mar 6 01:50:42.992996 systemd[1]: run-netns-cni\x2ded3c1887\x2d203a\x2dfdfe\x2de7de\x2d8f850c8e362a.mount: Deactivated successfully. Mar 6 01:50:42.993172 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a-shm.mount: Deactivated successfully. 
Mar 6 01:50:43.000848 containerd[1469]: time="2026-03-06T01:50:43.000786536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595756dd6-hffk5,Uid:b21f2ee9-d154-41f0-994a-34e4bb6425e8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:43.001990 kubelet[2576]: E0306 01:50:43.001909 2576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:43.002324 kubelet[2576]: E0306 01:50:43.001987 2576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd82897d6b999a8125851bbf9fc9bd5f5f9432a09f8b549e95c2fc1941575b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-595756dd6-hffk5" Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:41.604 [INFO][3643] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:41.608 [INFO][3643] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" iface="eth0" netns="/var/run/netns/cni-96cec84c-3267-5315-d6f6-d4c97b7d0bdd" Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:41.609 [INFO][3643] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" iface="eth0" netns="/var/run/netns/cni-96cec84c-3267-5315-d6f6-d4c97b7d0bdd" Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:41.609 [INFO][3643] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" iface="eth0" netns="/var/run/netns/cni-96cec84c-3267-5315-d6f6-d4c97b7d0bdd" Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:41.609 [INFO][3643] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:41.609 [INFO][3643] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:41.769 [INFO][3700] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" HandleID="k8s-pod-network.8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" Workload="localhost-k8s-csi--node--driver--xv9jc-eth0" Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:41.774 [INFO][3700] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:42.944 [INFO][3700] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:42.973 [WARNING][3700] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" HandleID="k8s-pod-network.8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" Workload="localhost-k8s-csi--node--driver--xv9jc-eth0" Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:42.977 [INFO][3700] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" HandleID="k8s-pod-network.8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" Workload="localhost-k8s-csi--node--driver--xv9jc-eth0" Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:43.006 [INFO][3700] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:43.034359 containerd[1469]: 2026-03-06 01:50:43.026 [INFO][3643] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151" Mar 6 01:50:43.043633 containerd[1469]: time="2026-03-06T01:50:43.043540360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xv9jc,Uid:20ed6bd5-f019-43c9-934e-78717b2dba0c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:43.048287 kubelet[2576]: E0306 01:50:43.045618 2576 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:50:43.048287 kubelet[2576]: E0306 01:50:43.046570 2576 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xv9jc" Mar 6 01:50:43.048287 kubelet[2576]: E0306 01:50:43.046597 2576 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xv9jc" Mar 6 01:50:43.048402 kubelet[2576]: E0306 01:50:43.046653 2576 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xv9jc_calico-system(20ed6bd5-f019-43c9-934e-78717b2dba0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xv9jc_calico-system(20ed6bd5-f019-43c9-934e-78717b2dba0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xv9jc" podUID="20ed6bd5-f019-43c9-934e-78717b2dba0c" Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:41.908 [INFO][3744] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:41.908 [INFO][3744] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" iface="eth0" netns="/var/run/netns/cni-61d53ec6-4141-b11c-8f8c-d0d147d7ac03" Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:41.909 [INFO][3744] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" iface="eth0" netns="/var/run/netns/cni-61d53ec6-4141-b11c-8f8c-d0d147d7ac03" Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:41.917 [INFO][3744] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" iface="eth0" netns="/var/run/netns/cni-61d53ec6-4141-b11c-8f8c-d0d147d7ac03" Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:41.917 [INFO][3744] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:41.917 [INFO][3744] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:41.979 [INFO][3785] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" HandleID="k8s-pod-network.b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:41.982 [INFO][3785] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:42.996 [INFO][3785] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:43.021 [WARNING][3785] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" HandleID="k8s-pod-network.b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:43.023 [INFO][3785] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" HandleID="k8s-pod-network.b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:43.028 [INFO][3785] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:43.050723 containerd[1469]: 2026-03-06 01:50:43.045 [INFO][3744] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:50:43.052608 containerd[1469]: time="2026-03-06T01:50:43.052583472Z" level=info msg="TearDown network for sandbox \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\" successfully" Mar 6 01:50:43.053025 containerd[1469]: time="2026-03-06T01:50:43.053006560Z" level=info msg="StopPodSandbox for \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\" returns successfully" Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:41.488 [ERROR][3626] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:41.525 [INFO][3626] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0 calico-apiserver-8459ffd5ff- calico-system 6664a973-0768-4772-9a16-4ab55bf393fa 909 0 2026-03-06 01:50:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8459ffd5ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8459ffd5ff-w4b5h eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibe659d715cd [] [] }} ContainerID="32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-w4b5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-" Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:41.525 [INFO][3626] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" Namespace="calico-system" 
Pod="calico-apiserver-8459ffd5ff-w4b5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0" Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:41.684 [INFO][3674] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" HandleID="k8s-pod-network.32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0" Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:41.715 [INFO][3674] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" HandleID="k8s-pod-network.32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059c270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-8459ffd5ff-w4b5h", "timestamp":"2026-03-06 01:50:41.684594724 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004a6c60)} Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:41.716 [INFO][3674] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:41.772 [INFO][3674] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:41.773 [INFO][3674] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:41.778 [INFO][3674] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" host="localhost" Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:41.844 [INFO][3674] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:41.871 [INFO][3674] ipam/ipam.go 1965: Failed to create global IPAM config; another node got there first. Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:42.883 [INFO][3674] ipam/ipam.go 558: Ran out of existing affine blocks for host host="localhost" Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:42.889 [INFO][3674] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="localhost" Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:42.893 [INFO][3674] ipam/ipam.go 588: Found unclaimed block in 4.486077ms host="localhost" subnet=192.168.88.128/26 Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:42.893 [INFO][3674] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:42.900 [INFO][3674] ipam/ipam_block_reader_writer.go 186: Block affinity already exists, getting existing affinity host="localhost" subnet=192.168.88.128/26 Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:42.903 [INFO][3674] ipam/ipam_block_reader_writer.go 194: Got existing affinity host="localhost" subnet=192.168.88.128/26 Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:42.904 [INFO][3674] ipam/ipam_block_reader_writer.go 202: Existing affinity is already confirmed host="localhost" 
subnet=192.168.88.128/26 Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:42.904 [INFO][3674] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:42.908 [INFO][3674] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:42.908 [INFO][3674] ipam/ipam.go 623: Block '192.168.88.128/26' has 63 free ips which is more than 1 ips required. host="localhost" subnet=192.168.88.128/26 Mar 6 01:50:43.053699 containerd[1469]: 2026-03-06 01:50:42.908 [INFO][3674] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" host="localhost" Mar 6 01:50:43.054742 containerd[1469]: 2026-03-06 01:50:42.911 [INFO][3674] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf Mar 6 01:50:43.054742 containerd[1469]: 2026-03-06 01:50:42.920 [INFO][3674] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" host="localhost" Mar 6 01:50:43.054742 containerd[1469]: 2026-03-06 01:50:42.928 [INFO][3674] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" host="localhost" Mar 6 01:50:43.054742 containerd[1469]: 2026-03-06 01:50:42.929 [INFO][3674] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" host="localhost" Mar 6 01:50:43.054742 containerd[1469]: 2026-03-06 01:50:42.929 [INFO][3674] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:50:43.054742 containerd[1469]: 2026-03-06 01:50:42.929 [INFO][3674] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" HandleID="k8s-pod-network.32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0" Mar 6 01:50:43.054742 containerd[1469]: 2026-03-06 01:50:42.940 [INFO][3626] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-w4b5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0", GenerateName:"calico-apiserver-8459ffd5ff-", Namespace:"calico-system", SelfLink:"", UID:"6664a973-0768-4772-9a16-4ab55bf393fa", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8459ffd5ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8459ffd5ff-w4b5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibe659d715cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.054742 containerd[1469]: 2026-03-06 01:50:42.940 [INFO][3626] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-w4b5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0" Mar 6 01:50:43.054742 containerd[1469]: 2026-03-06 01:50:42.940 [INFO][3626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe659d715cd ContainerID="32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-w4b5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0" Mar 6 01:50:43.054742 containerd[1469]: 2026-03-06 01:50:42.977 [INFO][3626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-w4b5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0" Mar 6 01:50:43.055628 containerd[1469]: 2026-03-06 01:50:42.985 [INFO][3626] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-w4b5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0", GenerateName:"calico-apiserver-8459ffd5ff-", Namespace:"calico-system", 
SelfLink:"", UID:"6664a973-0768-4772-9a16-4ab55bf393fa", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8459ffd5ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf", Pod:"calico-apiserver-8459ffd5ff-w4b5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibe659d715cd", MAC:"da:71:bf:d7:c8:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.055628 containerd[1469]: 2026-03-06 01:50:43.040 [INFO][3626] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-w4b5h" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--w4b5h-eth0" Mar 6 01:50:43.065921 kubelet[2576]: E0306 01:50:43.065665 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:43.078464 containerd[1469]: time="2026-03-06T01:50:43.077582731Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-k659t,Uid:8ed54e90-4fd2-4aae-b446-8c5b8cd922cd,Namespace:kube-system,Attempt:1,}" Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:41.950 [INFO][3745] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:41.950 [INFO][3745] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" iface="eth0" netns="/var/run/netns/cni-7a6ca989-acbd-f749-c524-1a983b57286a" Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:41.951 [INFO][3745] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" iface="eth0" netns="/var/run/netns/cni-7a6ca989-acbd-f749-c524-1a983b57286a" Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:41.951 [INFO][3745] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" iface="eth0" netns="/var/run/netns/cni-7a6ca989-acbd-f749-c524-1a983b57286a" Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:41.951 [INFO][3745] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:41.952 [INFO][3745] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:42.018 [INFO][3793] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" HandleID="k8s-pod-network.818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:42.019 [INFO][3793] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:43.029 [INFO][3793] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:43.055 [WARNING][3793] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" HandleID="k8s-pod-network.818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:43.055 [INFO][3793] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" HandleID="k8s-pod-network.818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:43.059 [INFO][3793] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:43.098795 containerd[1469]: 2026-03-06 01:50:43.087 [INFO][3745] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:50:43.101593 containerd[1469]: time="2026-03-06T01:50:43.100870148Z" level=info msg="TearDown network for sandbox \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\" successfully" Mar 6 01:50:43.101593 containerd[1469]: time="2026-03-06T01:50:43.100912047Z" level=info msg="StopPodSandbox for \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\" returns successfully" Mar 6 01:50:43.110887 containerd[1469]: time="2026-03-06T01:50:43.110841492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8459ffd5ff-d94vx,Uid:36c75156-d099-419f-a2ec-938f6d71a9bf,Namespace:calico-system,Attempt:1,}" Mar 6 01:50:43.115067 containerd[1469]: time="2026-03-06T01:50:43.111854116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:43.115067 containerd[1469]: time="2026-03-06T01:50:43.111924115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:43.115067 containerd[1469]: time="2026-03-06T01:50:43.111957908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:43.115067 containerd[1469]: time="2026-03-06T01:50:43.112127085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:43.178835 systemd-networkd[1405]: calic7b5c3a5b01: Link UP Mar 6 01:50:43.180430 systemd-networkd[1405]: calic7b5c3a5b01: Gained carrier Mar 6 01:50:43.201418 systemd[1]: Started cri-containerd-32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf.scope - libcontainer container 32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf. Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:41.987 [ERROR][3769] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:42.011 [INFO][3769] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0 goldmane-cccfbd5cf- calico-system 5b6282a8-6c40-45a0-9b20-38cca779da41 934 0 2026-03-06 01:50:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-km8qm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic7b5c3a5b01 [] [] }} ContainerID="e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" Namespace="calico-system" Pod="goldmane-cccfbd5cf-km8qm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--km8qm-" Mar 6 
01:50:43.247456 containerd[1469]: 2026-03-06 01:50:42.011 [INFO][3769] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" Namespace="calico-system" Pod="goldmane-cccfbd5cf-km8qm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:42.068 [INFO][3809] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" HandleID="k8s-pod-network.e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" Workload="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:42.079 [INFO][3809] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" HandleID="k8s-pod-network.e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" Workload="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efdc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-km8qm", "timestamp":"2026-03-06 01:50:42.068389911 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000143340)} Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:42.079 [INFO][3809] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.061 [INFO][3809] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.062 [INFO][3809] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.068 [INFO][3809] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" host="localhost" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.082 [INFO][3809] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.097 [INFO][3809] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.107 [INFO][3809] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.112 [INFO][3809] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.115 [INFO][3809] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" host="localhost" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.123 [INFO][3809] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.148 [INFO][3809] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" host="localhost" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.159 [INFO][3809] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" host="localhost" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.159 [INFO][3809] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" host="localhost" Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.159 [INFO][3809] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:43.247456 containerd[1469]: 2026-03-06 01:50:43.159 [INFO][3809] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" HandleID="k8s-pod-network.e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" Workload="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" Mar 6 01:50:43.248078 containerd[1469]: 2026-03-06 01:50:43.171 [INFO][3769] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" Namespace="calico-system" Pod="goldmane-cccfbd5cf-km8qm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"5b6282a8-6c40-45a0-9b20-38cca779da41", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-km8qm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7b5c3a5b01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.248078 containerd[1469]: 2026-03-06 01:50:43.171 [INFO][3769] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" Namespace="calico-system" Pod="goldmane-cccfbd5cf-km8qm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" Mar 6 01:50:43.248078 containerd[1469]: 2026-03-06 01:50:43.171 [INFO][3769] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7b5c3a5b01 ContainerID="e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" Namespace="calico-system" Pod="goldmane-cccfbd5cf-km8qm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" Mar 6 01:50:43.248078 containerd[1469]: 2026-03-06 01:50:43.181 [INFO][3769] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" Namespace="calico-system" Pod="goldmane-cccfbd5cf-km8qm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" Mar 6 01:50:43.248078 containerd[1469]: 2026-03-06 01:50:43.185 [INFO][3769] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" Namespace="calico-system" Pod="goldmane-cccfbd5cf-km8qm" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"5b6282a8-6c40-45a0-9b20-38cca779da41", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed", Pod:"goldmane-cccfbd5cf-km8qm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7b5c3a5b01", MAC:"72:d3:33:f4:65:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.248078 containerd[1469]: 2026-03-06 01:50:43.206 [INFO][3769] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed" Namespace="calico-system" Pod="goldmane-cccfbd5cf-km8qm" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--km8qm-eth0" Mar 6 01:50:43.256763 systemd-networkd[1405]: calib4587e1bf93: Link UP Mar 6 01:50:43.258818 systemd-networkd[1405]: 
calib4587e1bf93: Gained carrier Mar 6 01:50:43.269115 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:41.957 [ERROR][3754] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:42.009 [INFO][3754] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0 calico-kube-controllers-6569c6d5b5- calico-system 1851d6a8-7f92-4eab-9dc3-e0fb2b763487 935 0 2026-03-06 01:50:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6569c6d5b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6569c6d5b5-lvltr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib4587e1bf93 [] [] }} ContainerID="db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" Namespace="calico-system" Pod="calico-kube-controllers-6569c6d5b5-lvltr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:42.009 [INFO][3754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" Namespace="calico-system" Pod="calico-kube-controllers-6569c6d5b5-lvltr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:42.086 [INFO][3812] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" HandleID="k8s-pod-network.db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" Workload="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:42.096 [INFO][3812] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" HandleID="k8s-pod-network.db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" Workload="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6569c6d5b5-lvltr", "timestamp":"2026-03-06 01:50:42.086739484 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000199600)} Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:42.096 [INFO][3812] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.162 [INFO][3812] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.163 [INFO][3812] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.173 [INFO][3812] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" host="localhost" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.196 [INFO][3812] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.217 [INFO][3812] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.221 [INFO][3812] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.224 [INFO][3812] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.224 [INFO][3812] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" host="localhost" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.226 [INFO][3812] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632 Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.231 [INFO][3812] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" host="localhost" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.239 [INFO][3812] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" host="localhost" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.240 [INFO][3812] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" host="localhost" Mar 6 01:50:43.321432 containerd[1469]: 2026-03-06 01:50:43.240 [INFO][3812] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:43.323320 containerd[1469]: 2026-03-06 01:50:43.240 [INFO][3812] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" HandleID="k8s-pod-network.db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" Workload="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" Mar 6 01:50:43.323320 containerd[1469]: 2026-03-06 01:50:43.247 [INFO][3754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" Namespace="calico-system" Pod="calico-kube-controllers-6569c6d5b5-lvltr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0", GenerateName:"calico-kube-controllers-6569c6d5b5-", Namespace:"calico-system", SelfLink:"", UID:"1851d6a8-7f92-4eab-9dc3-e0fb2b763487", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6569c6d5b5", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6569c6d5b5-lvltr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4587e1bf93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.323320 containerd[1469]: 2026-03-06 01:50:43.248 [INFO][3754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" Namespace="calico-system" Pod="calico-kube-controllers-6569c6d5b5-lvltr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" Mar 6 01:50:43.323320 containerd[1469]: 2026-03-06 01:50:43.248 [INFO][3754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4587e1bf93 ContainerID="db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" Namespace="calico-system" Pod="calico-kube-controllers-6569c6d5b5-lvltr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" Mar 6 01:50:43.323320 containerd[1469]: 2026-03-06 01:50:43.270 [INFO][3754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" Namespace="calico-system" Pod="calico-kube-controllers-6569c6d5b5-lvltr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" Mar 6 01:50:43.323635 containerd[1469]: 2026-03-06 
01:50:43.277 [INFO][3754] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" Namespace="calico-system" Pod="calico-kube-controllers-6569c6d5b5-lvltr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0", GenerateName:"calico-kube-controllers-6569c6d5b5-", Namespace:"calico-system", SelfLink:"", UID:"1851d6a8-7f92-4eab-9dc3-e0fb2b763487", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6569c6d5b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632", Pod:"calico-kube-controllers-6569c6d5b5-lvltr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4587e1bf93", MAC:"7a:52:82:54:5f:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.323635 containerd[1469]: 2026-03-06 
01:50:43.300 [INFO][3754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632" Namespace="calico-system" Pod="calico-kube-controllers-6569c6d5b5-lvltr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6569c6d5b5--lvltr-eth0" Mar 6 01:50:43.404038 containerd[1469]: time="2026-03-06T01:50:43.404003006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8459ffd5ff-w4b5h,Uid:6664a973-0768-4772-9a16-4ab55bf393fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf\"" Mar 6 01:50:43.406545 containerd[1469]: time="2026-03-06T01:50:43.406482354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 01:50:43.422835 containerd[1469]: time="2026-03-06T01:50:43.422403077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:43.422835 containerd[1469]: time="2026-03-06T01:50:43.422451417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:43.422835 containerd[1469]: time="2026-03-06T01:50:43.422461966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:43.422835 containerd[1469]: time="2026-03-06T01:50:43.422600134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:43.468886 containerd[1469]: time="2026-03-06T01:50:43.468676360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:43.468886 containerd[1469]: time="2026-03-06T01:50:43.468727276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:43.468886 containerd[1469]: time="2026-03-06T01:50:43.468737675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:43.468886 containerd[1469]: time="2026-03-06T01:50:43.468812225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:43.503461 systemd[1]: Started cri-containerd-e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed.scope - libcontainer container e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed. Mar 6 01:50:43.512037 systemd-networkd[1405]: cali9bf82bdb396: Link UP Mar 6 01:50:43.514079 systemd-networkd[1405]: cali9bf82bdb396: Gained carrier Mar 6 01:50:43.543202 systemd[1]: run-netns-cni\x2d96cec84c\x2d3267\x2d5315\x2dd6f6\x2dd4c97b7d0bdd.mount: Deactivated successfully. Mar 6 01:50:43.545063 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8159ef1bded72c59accfe448e8a7b8f27db29d86d86a6c1a8bee68985750e151-shm.mount: Deactivated successfully. Mar 6 01:50:43.545148 systemd[1]: run-netns-cni\x2d61d53ec6\x2d4141\x2db11c\x2d8f8c\x2dd0d147d7ac03.mount: Deactivated successfully. Mar 6 01:50:43.545290 systemd[1]: run-netns-cni\x2d7a6ca989\x2dacbd\x2df749\x2dc524\x2d1a983b57286a.mount: Deactivated successfully. Mar 6 01:50:43.572967 systemd[1]: Started cri-containerd-db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632.scope - libcontainer container db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632. 
Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:42.876 [ERROR][3836] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:42.895 [INFO][3836] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--pl2df-eth0 coredns-66bc5c9577- kube-system fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb 933 0 2026-03-06 01:50:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-pl2df eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9bf82bdb396 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" Namespace="kube-system" Pod="coredns-66bc5c9577-pl2df" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pl2df-" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:42.895 [INFO][3836] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" Namespace="kube-system" Pod="coredns-66bc5c9577-pl2df" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:42.984 [INFO][3853] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" HandleID="k8s-pod-network.4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" Workload="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.005 [INFO][3853] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" HandleID="k8s-pod-network.4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" Workload="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-pl2df", "timestamp":"2026-03-06 01:50:42.983985924 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000440dc0)} Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.005 [INFO][3853] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.241 [INFO][3853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.241 [INFO][3853] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.281 [INFO][3853] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" host="localhost" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.313 [INFO][3853] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.333 [INFO][3853] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.339 [INFO][3853] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.364 [INFO][3853] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.364 [INFO][3853] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" host="localhost" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.375 [INFO][3853] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32 Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.389 [INFO][3853] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" host="localhost" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.420 [INFO][3853] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" host="localhost" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.420 [INFO][3853] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" host="localhost" Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.420 [INFO][3853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:43.589173 containerd[1469]: 2026-03-06 01:50:43.420 [INFO][3853] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" HandleID="k8s-pod-network.4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" Workload="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" Mar 6 01:50:43.590904 containerd[1469]: 2026-03-06 01:50:43.496 [INFO][3836] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" Namespace="kube-system" Pod="coredns-66bc5c9577-pl2df" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pl2df-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-pl2df", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bf82bdb396", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.590904 containerd[1469]: 2026-03-06 01:50:43.496 [INFO][3836] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" Namespace="kube-system" Pod="coredns-66bc5c9577-pl2df" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" Mar 6 01:50:43.590904 containerd[1469]: 2026-03-06 01:50:43.496 [INFO][3836] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bf82bdb396 ContainerID="4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" Namespace="kube-system" Pod="coredns-66bc5c9577-pl2df" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" Mar 6 
01:50:43.590904 containerd[1469]: 2026-03-06 01:50:43.516 [INFO][3836] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" Namespace="kube-system" Pod="coredns-66bc5c9577-pl2df" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" Mar 6 01:50:43.592376 containerd[1469]: 2026-03-06 01:50:43.518 [INFO][3836] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" Namespace="kube-system" Pod="coredns-66bc5c9577-pl2df" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--pl2df-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32", Pod:"coredns-66bc5c9577-pl2df", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bf82bdb396", MAC:"ea:9b:43:0c:5f:1d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.592376 containerd[1469]: 2026-03-06 01:50:43.558 [INFO][3836] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32" Namespace="kube-system" Pod="coredns-66bc5c9577-pl2df" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--pl2df-eth0" Mar 6 01:50:43.613711 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:50:43.711104 systemd-networkd[1405]: cali700c9b2467a: Link UP Mar 6 01:50:43.712804 systemd-networkd[1405]: cali700c9b2467a: Gained carrier Mar 6 01:50:43.721058 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:50:43.787490 containerd[1469]: time="2026-03-06T01:50:43.787356846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:43.788487 containerd[1469]: time="2026-03-06T01:50:43.788426884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:43.788707 containerd[1469]: time="2026-03-06T01:50:43.788669255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:43.789386 containerd[1469]: time="2026-03-06T01:50:43.789345087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.319 [ERROR][3919] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.366 [INFO][3919] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--k659t-eth0 coredns-66bc5c9577- kube-system 8ed54e90-4fd2-4aae-b446-8c5b8cd922cd 947 0 2026-03-06 01:50:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-k659t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali700c9b2467a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" Namespace="kube-system" Pod="coredns-66bc5c9577-k659t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k659t-" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.366 [INFO][3919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" Namespace="kube-system" Pod="coredns-66bc5c9577-k659t" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.451 [INFO][4081] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" HandleID="k8s-pod-network.8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.479 [INFO][4081] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" HandleID="k8s-pod-network.8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003dc8f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-k659t", "timestamp":"2026-03-06 01:50:43.451933763 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001a6840)} Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.479 [INFO][4081] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.480 [INFO][4081] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.480 [INFO][4081] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.530 [INFO][4081] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" host="localhost" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.584 [INFO][4081] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.600 [INFO][4081] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.609 [INFO][4081] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.625 [INFO][4081] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.626 [INFO][4081] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" host="localhost" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.634 [INFO][4081] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68 Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.646 [INFO][4081] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" host="localhost" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.684 [INFO][4081] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" host="localhost" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.684 [INFO][4081] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" host="localhost" Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.684 [INFO][4081] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:43.803554 containerd[1469]: 2026-03-06 01:50:43.684 [INFO][4081] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" HandleID="k8s-pod-network.8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:50:43.804825 containerd[1469]: 2026-03-06 01:50:43.698 [INFO][3919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" Namespace="kube-system" Pod="coredns-66bc5c9577-k659t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k659t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--k659t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8ed54e90-4fd2-4aae-b446-8c5b8cd922cd", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-k659t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali700c9b2467a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.804825 containerd[1469]: 2026-03-06 01:50:43.698 [INFO][3919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" Namespace="kube-system" Pod="coredns-66bc5c9577-k659t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:50:43.804825 containerd[1469]: 2026-03-06 01:50:43.698 [INFO][3919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali700c9b2467a ContainerID="8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" Namespace="kube-system" Pod="coredns-66bc5c9577-k659t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 
01:50:43.804825 containerd[1469]: 2026-03-06 01:50:43.720 [INFO][3919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" Namespace="kube-system" Pod="coredns-66bc5c9577-k659t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:50:43.805104 containerd[1469]: 2026-03-06 01:50:43.720 [INFO][3919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" Namespace="kube-system" Pod="coredns-66bc5c9577-k659t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k659t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--k659t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8ed54e90-4fd2-4aae-b446-8c5b8cd922cd", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68", Pod:"coredns-66bc5c9577-k659t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali700c9b2467a", MAC:"52:f9:d1:72:35:e8", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.805104 containerd[1469]: 2026-03-06 01:50:43.754 [INFO][3919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68" Namespace="kube-system" Pod="coredns-66bc5c9577-k659t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:50:43.822920 containerd[1469]: time="2026-03-06T01:50:43.822802524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-km8qm,Uid:5b6282a8-6c40-45a0-9b20-38cca779da41,Namespace:calico-system,Attempt:0,} returns sandbox id \"e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed\"" Mar 6 01:50:43.833055 containerd[1469]: time="2026-03-06T01:50:43.832782732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xv9jc,Uid:20ed6bd5-f019-43c9-934e-78717b2dba0c,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:43.860824 systemd-networkd[1405]: calibc5ecd7fb88: Link UP Mar 6 01:50:43.861440 systemd-networkd[1405]: calibc5ecd7fb88: Gained carrier Mar 6 01:50:43.894459 systemd[1]: Started 
cri-containerd-4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32.scope - libcontainer container 4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32. Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.243 [ERROR][3954] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.288 [INFO][3954] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0 calico-apiserver-8459ffd5ff- calico-system 36c75156-d099-419f-a2ec-938f6d71a9bf 950 0 2026-03-06 01:50:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8459ffd5ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8459ffd5ff-d94vx eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibc5ecd7fb88 [] [] }} ContainerID="daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-d94vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.288 [INFO][3954] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-d94vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.466 [INFO][4019] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" HandleID="k8s-pod-network.daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.572 [INFO][4019] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" HandleID="k8s-pod-network.daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000514f90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-8459ffd5ff-d94vx", "timestamp":"2026-03-06 01:50:43.466176334 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe2c0)} Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.575 [INFO][4019] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.684 [INFO][4019] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.685 [INFO][4019] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.696 [INFO][4019] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" host="localhost" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.704 [INFO][4019] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.728 [INFO][4019] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.740 [INFO][4019] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.756 [INFO][4019] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.756 [INFO][4019] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" host="localhost" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.762 [INFO][4019] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.788 [INFO][4019] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" host="localhost" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.809 [INFO][4019] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" host="localhost" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.809 [INFO][4019] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" host="localhost" Mar 6 01:50:43.898002 containerd[1469]: 2026-03-06 01:50:43.809 [INFO][4019] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:43.898733 containerd[1469]: 2026-03-06 01:50:43.809 [INFO][4019] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" HandleID="k8s-pod-network.daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:50:43.898733 containerd[1469]: 2026-03-06 01:50:43.843 [INFO][3954] cni-plugin/k8s.go 418: Populated endpoint ContainerID="daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-d94vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0", GenerateName:"calico-apiserver-8459ffd5ff-", Namespace:"calico-system", SelfLink:"", UID:"36c75156-d099-419f-a2ec-938f6d71a9bf", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8459ffd5ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8459ffd5ff-d94vx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc5ecd7fb88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.898733 containerd[1469]: 2026-03-06 01:50:43.845 [INFO][3954] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-d94vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:50:43.898733 containerd[1469]: 2026-03-06 01:50:43.845 [INFO][3954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc5ecd7fb88 ContainerID="daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-d94vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:50:43.898733 containerd[1469]: 2026-03-06 01:50:43.860 [INFO][3954] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-d94vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:50:43.898733 containerd[1469]: 2026-03-06 01:50:43.861 [INFO][3954] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-d94vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0", GenerateName:"calico-apiserver-8459ffd5ff-", Namespace:"calico-system", SelfLink:"", UID:"36c75156-d099-419f-a2ec-938f6d71a9bf", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8459ffd5ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e", Pod:"calico-apiserver-8459ffd5ff-d94vx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc5ecd7fb88", MAC:"32:fa:8c:66:24:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:43.899084 containerd[1469]: 2026-03-06 01:50:43.876 [INFO][3954] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e" 
Namespace="calico-system" Pod="calico-apiserver-8459ffd5ff-d94vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:50:43.970914 containerd[1469]: time="2026-03-06T01:50:43.970008676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6569c6d5b5-lvltr,Uid:1851d6a8-7f92-4eab-9dc3-e0fb2b763487,Namespace:calico-system,Attempt:0,} returns sandbox id \"db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632\"" Mar 6 01:50:43.982684 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:50:43.985411 kubelet[2576]: I0306 01:50:43.983126 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b21f2ee9-d154-41f0-994a-34e4bb6425e8-whisker-ca-bundle\") pod \"b21f2ee9-d154-41f0-994a-34e4bb6425e8\" (UID: \"b21f2ee9-d154-41f0-994a-34e4bb6425e8\") " Mar 6 01:50:43.985411 kubelet[2576]: I0306 01:50:43.983195 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8plj5\" (UniqueName: \"kubernetes.io/projected/b21f2ee9-d154-41f0-994a-34e4bb6425e8-kube-api-access-8plj5\") pod \"b21f2ee9-d154-41f0-994a-34e4bb6425e8\" (UID: \"b21f2ee9-d154-41f0-994a-34e4bb6425e8\") " Mar 6 01:50:43.985411 kubelet[2576]: I0306 01:50:43.983358 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b21f2ee9-d154-41f0-994a-34e4bb6425e8-nginx-config\") pod \"b21f2ee9-d154-41f0-994a-34e4bb6425e8\" (UID: \"b21f2ee9-d154-41f0-994a-34e4bb6425e8\") " Mar 6 01:50:43.985411 kubelet[2576]: I0306 01:50:43.983395 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b21f2ee9-d154-41f0-994a-34e4bb6425e8-whisker-backend-key-pair\") pod 
\"b21f2ee9-d154-41f0-994a-34e4bb6425e8\" (UID: \"b21f2ee9-d154-41f0-994a-34e4bb6425e8\") " Mar 6 01:50:43.985411 kubelet[2576]: I0306 01:50:43.984682 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21f2ee9-d154-41f0-994a-34e4bb6425e8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b21f2ee9-d154-41f0-994a-34e4bb6425e8" (UID: "b21f2ee9-d154-41f0-994a-34e4bb6425e8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:50:43.988187 kubelet[2576]: I0306 01:50:43.988160 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21f2ee9-d154-41f0-994a-34e4bb6425e8-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "b21f2ee9-d154-41f0-994a-34e4bb6425e8" (UID: "b21f2ee9-d154-41f0-994a-34e4bb6425e8"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:50:44.008335 kubelet[2576]: I0306 01:50:44.008167 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b21f2ee9-d154-41f0-994a-34e4bb6425e8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b21f2ee9-d154-41f0-994a-34e4bb6425e8" (UID: "b21f2ee9-d154-41f0-994a-34e4bb6425e8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 01:50:44.008929 kubelet[2576]: I0306 01:50:44.008846 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21f2ee9-d154-41f0-994a-34e4bb6425e8-kube-api-access-8plj5" (OuterVolumeSpecName: "kube-api-access-8plj5") pod "b21f2ee9-d154-41f0-994a-34e4bb6425e8" (UID: "b21f2ee9-d154-41f0-994a-34e4bb6425e8"). InnerVolumeSpecName "kube-api-access-8plj5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 01:50:44.088629 kubelet[2576]: I0306 01:50:44.086587 2576 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b21f2ee9-d154-41f0-994a-34e4bb6425e8-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 6 01:50:44.088629 kubelet[2576]: I0306 01:50:44.086670 2576 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b21f2ee9-d154-41f0-994a-34e4bb6425e8-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 6 01:50:44.088629 kubelet[2576]: I0306 01:50:44.086690 2576 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b21f2ee9-d154-41f0-994a-34e4bb6425e8-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 6 01:50:44.088629 kubelet[2576]: I0306 01:50:44.086704 2576 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8plj5\" (UniqueName: \"kubernetes.io/projected/b21f2ee9-d154-41f0-994a-34e4bb6425e8-kube-api-access-8plj5\") on node \"localhost\" DevicePath \"\"" Mar 6 01:50:44.097334 containerd[1469]: time="2026-03-06T01:50:44.097109466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:44.097468 containerd[1469]: time="2026-03-06T01:50:44.097350486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:44.097468 containerd[1469]: time="2026-03-06T01:50:44.097374901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:44.097653 containerd[1469]: time="2026-03-06T01:50:44.097582247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:44.104476 containerd[1469]: time="2026-03-06T01:50:44.098834308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:44.104476 containerd[1469]: time="2026-03-06T01:50:44.098921471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:44.104476 containerd[1469]: time="2026-03-06T01:50:44.098939564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:44.104476 containerd[1469]: time="2026-03-06T01:50:44.099062584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:44.179582 systemd[1]: Removed slice kubepods-besteffort-podb21f2ee9_d154_41f0_994a_34e4bb6425e8.slice - libcontainer container kubepods-besteffort-podb21f2ee9_d154_41f0_994a_34e4bb6425e8.slice. Mar 6 01:50:44.196631 systemd[1]: Started cri-containerd-daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e.scope - libcontainer container daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e. Mar 6 01:50:44.203105 containerd[1469]: time="2026-03-06T01:50:44.202784342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pl2df,Uid:fc0d15f8-a18a-41b3-a55e-0af73e5a2dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32\"" Mar 6 01:50:44.203204 systemd[1]: Started cri-containerd-8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68.scope - libcontainer container 8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68. 
Mar 6 01:50:44.217135 kubelet[2576]: E0306 01:50:44.216434 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:44.221603 kernel: calico-node[4016]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 6 01:50:44.238875 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:50:44.289307 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:50:44.298102 containerd[1469]: time="2026-03-06T01:50:44.297723018Z" level=info msg="CreateContainer within sandbox \"4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:50:44.317163 containerd[1469]: time="2026-03-06T01:50:44.317070813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k659t,Uid:8ed54e90-4fd2-4aae-b446-8c5b8cd922cd,Namespace:kube-system,Attempt:1,} returns sandbox id \"8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68\"" Mar 6 01:50:44.321919 kubelet[2576]: E0306 01:50:44.321047 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:44.339735 containerd[1469]: time="2026-03-06T01:50:44.338843415Z" level=info msg="CreateContainer within sandbox \"8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:50:44.406983 containerd[1469]: time="2026-03-06T01:50:44.406488067Z" level=info msg="CreateContainer within sandbox \"4dac1e8dd91cc69347bd61d491b308921b6db8c9c710397145f86ff6c3e62b32\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"df7a70940678f95f29e09a9c7248032bba73be30b07a914dd9078f090230bf50\"" Mar 6 01:50:44.422377 containerd[1469]: time="2026-03-06T01:50:44.421445749Z" level=info msg="StartContainer for \"df7a70940678f95f29e09a9c7248032bba73be30b07a914dd9078f090230bf50\"" Mar 6 01:50:44.426479 containerd[1469]: time="2026-03-06T01:50:44.425352583Z" level=info msg="CreateContainer within sandbox \"8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0d8cf1b018db828a98f48051fd4800e8ed25efcb39a75841b6aaeb49b340169\"" Mar 6 01:50:44.430103 containerd[1469]: time="2026-03-06T01:50:44.429859839Z" level=info msg="StartContainer for \"d0d8cf1b018db828a98f48051fd4800e8ed25efcb39a75841b6aaeb49b340169\"" Mar 6 01:50:44.475686 systemd-networkd[1405]: calib4587e1bf93: Gained IPv6LL Mar 6 01:50:44.482602 containerd[1469]: time="2026-03-06T01:50:44.481957635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8459ffd5ff-d94vx,Uid:36c75156-d099-419f-a2ec-938f6d71a9bf,Namespace:calico-system,Attempt:1,} returns sandbox id \"daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e\"" Mar 6 01:50:44.533106 systemd[1]: var-lib-kubelet-pods-b21f2ee9\x2dd154\x2d41f0\x2d994a\x2d34e4bb6425e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8plj5.mount: Deactivated successfully. Mar 6 01:50:44.533738 systemd[1]: var-lib-kubelet-pods-b21f2ee9\x2dd154\x2d41f0\x2d994a\x2d34e4bb6425e8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 6 01:50:44.539723 systemd-networkd[1405]: calibe659d715cd: Gained IPv6LL Mar 6 01:50:44.548419 systemd[1]: Started cri-containerd-d0d8cf1b018db828a98f48051fd4800e8ed25efcb39a75841b6aaeb49b340169.scope - libcontainer container d0d8cf1b018db828a98f48051fd4800e8ed25efcb39a75841b6aaeb49b340169. 
Mar 6 01:50:44.586407 systemd[1]: Started cri-containerd-df7a70940678f95f29e09a9c7248032bba73be30b07a914dd9078f090230bf50.scope - libcontainer container df7a70940678f95f29e09a9c7248032bba73be30b07a914dd9078f090230bf50. Mar 6 01:50:44.616158 systemd-networkd[1405]: cali77fc011ec67: Link UP Mar 6 01:50:44.616579 systemd-networkd[1405]: cali77fc011ec67: Gained carrier Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.298 [INFO][4230] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xv9jc-eth0 csi-node-driver- calico-system 20ed6bd5-f019-43c9-934e-78717b2dba0c 936 0 2026-03-06 01:50:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xv9jc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali77fc011ec67 [] [] }} ContainerID="e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" Namespace="calico-system" Pod="csi-node-driver-xv9jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xv9jc-" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.302 [INFO][4230] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" Namespace="calico-system" Pod="csi-node-driver-xv9jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xv9jc-eth0" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.457 [INFO][4354] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" HandleID="k8s-pod-network.e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" 
Workload="localhost-k8s-csi--node--driver--xv9jc-eth0" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.476 [INFO][4354] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" HandleID="k8s-pod-network.e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" Workload="localhost-k8s-csi--node--driver--xv9jc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xv9jc", "timestamp":"2026-03-06 01:50:44.457303226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000198840)} Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.477 [INFO][4354] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.477 [INFO][4354] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.477 [INFO][4354] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.489 [INFO][4354] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" host="localhost" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.500 [INFO][4354] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.512 [INFO][4354] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.517 [INFO][4354] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.521 [INFO][4354] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.521 [INFO][4354] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" host="localhost" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.524 [INFO][4354] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.534 [INFO][4354] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" host="localhost" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.546 [INFO][4354] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" host="localhost" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.546 [INFO][4354] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" host="localhost" Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.546 [INFO][4354] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:44.677445 containerd[1469]: 2026-03-06 01:50:44.546 [INFO][4354] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" HandleID="k8s-pod-network.e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" Workload="localhost-k8s-csi--node--driver--xv9jc-eth0" Mar 6 01:50:44.679900 containerd[1469]: 2026-03-06 01:50:44.571 [INFO][4230] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" Namespace="calico-system" Pod="csi-node-driver-xv9jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xv9jc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xv9jc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20ed6bd5-f019-43c9-934e-78717b2dba0c", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xv9jc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali77fc011ec67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:44.679900 containerd[1469]: 2026-03-06 01:50:44.574 [INFO][4230] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" Namespace="calico-system" Pod="csi-node-driver-xv9jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xv9jc-eth0" Mar 6 01:50:44.679900 containerd[1469]: 2026-03-06 01:50:44.574 [INFO][4230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77fc011ec67 ContainerID="e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" Namespace="calico-system" Pod="csi-node-driver-xv9jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xv9jc-eth0" Mar 6 01:50:44.679900 containerd[1469]: 2026-03-06 01:50:44.646 [INFO][4230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" Namespace="calico-system" Pod="csi-node-driver-xv9jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xv9jc-eth0" Mar 6 01:50:44.679900 containerd[1469]: 2026-03-06 01:50:44.648 [INFO][4230] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" 
Namespace="calico-system" Pod="csi-node-driver-xv9jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--xv9jc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xv9jc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20ed6bd5-f019-43c9-934e-78717b2dba0c", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc", Pod:"csi-node-driver-xv9jc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali77fc011ec67", MAC:"0e:f2:5a:80:03:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:44.679900 containerd[1469]: 2026-03-06 01:50:44.665 [INFO][4230] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc" Namespace="calico-system" Pod="csi-node-driver-xv9jc" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--xv9jc-eth0" Mar 6 01:50:44.731083 systemd-networkd[1405]: calic7b5c3a5b01: Gained IPv6LL Mar 6 01:50:44.760399 containerd[1469]: time="2026-03-06T01:50:44.760354641Z" level=info msg="StartContainer for \"df7a70940678f95f29e09a9c7248032bba73be30b07a914dd9078f090230bf50\" returns successfully" Mar 6 01:50:44.809007 containerd[1469]: time="2026-03-06T01:50:44.807294349Z" level=info msg="StartContainer for \"d0d8cf1b018db828a98f48051fd4800e8ed25efcb39a75841b6aaeb49b340169\" returns successfully" Mar 6 01:50:44.813775 containerd[1469]: time="2026-03-06T01:50:44.812379302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:44.813775 containerd[1469]: time="2026-03-06T01:50:44.812665015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:44.813775 containerd[1469]: time="2026-03-06T01:50:44.812709658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:44.813775 containerd[1469]: time="2026-03-06T01:50:44.812870107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:44.845449 kubelet[2576]: E0306 01:50:44.845415 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:44.872178 kubelet[2576]: I0306 01:50:44.869700 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pl2df" podStartSLOduration=36.869680603 podStartE2EDuration="36.869680603s" podCreationTimestamp="2026-03-06 01:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:50:44.868142745 +0000 UTC m=+41.105711642" watchObservedRunningTime="2026-03-06 01:50:44.869680603 +0000 UTC m=+41.107249500" Mar 6 01:50:44.890891 kubelet[2576]: E0306 01:50:44.889641 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:44.922448 systemd[1]: Started cri-containerd-e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc.scope - libcontainer container e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc. 
Mar 6 01:50:44.985195 kubelet[2576]: I0306 01:50:44.985014 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k659t" podStartSLOduration=36.984989852 podStartE2EDuration="36.984989852s" podCreationTimestamp="2026-03-06 01:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:50:44.955900278 +0000 UTC m=+41.193469206" watchObservedRunningTime="2026-03-06 01:50:44.984989852 +0000 UTC m=+41.222558750" Mar 6 01:50:45.112066 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:50:45.150751 systemd[1]: Created slice kubepods-besteffort-podf5a8075c_2d5f_43ad_9752_b4dcdc06c4a5.slice - libcontainer container kubepods-besteffort-podf5a8075c_2d5f_43ad_9752_b4dcdc06c4a5.slice. Mar 6 01:50:45.163677 containerd[1469]: time="2026-03-06T01:50:45.163400764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xv9jc,Uid:20ed6bd5-f019-43c9-934e-78717b2dba0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc\"" Mar 6 01:50:45.217297 kubelet[2576]: I0306 01:50:45.216768 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5-whisker-backend-key-pair\") pod \"whisker-66cb699ccf-49w4k\" (UID: \"f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5\") " pod="calico-system/whisker-66cb699ccf-49w4k" Mar 6 01:50:45.217297 kubelet[2576]: I0306 01:50:45.216817 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5-whisker-ca-bundle\") pod \"whisker-66cb699ccf-49w4k\" (UID: 
\"f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5\") " pod="calico-system/whisker-66cb699ccf-49w4k" Mar 6 01:50:45.217297 kubelet[2576]: I0306 01:50:45.216839 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csqv9\" (UniqueName: \"kubernetes.io/projected/f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5-kube-api-access-csqv9\") pod \"whisker-66cb699ccf-49w4k\" (UID: \"f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5\") " pod="calico-system/whisker-66cb699ccf-49w4k" Mar 6 01:50:45.217297 kubelet[2576]: I0306 01:50:45.216862 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5-nginx-config\") pod \"whisker-66cb699ccf-49w4k\" (UID: \"f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5\") " pod="calico-system/whisker-66cb699ccf-49w4k" Mar 6 01:50:45.242704 systemd-networkd[1405]: calibc5ecd7fb88: Gained IPv6LL Mar 6 01:50:45.306682 systemd-networkd[1405]: cali9bf82bdb396: Gained IPv6LL Mar 6 01:50:45.473083 containerd[1469]: time="2026-03-06T01:50:45.473031474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cb699ccf-49w4k,Uid:f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5,Namespace:calico-system,Attempt:0,}" Mar 6 01:50:45.563354 systemd-networkd[1405]: cali700c9b2467a: Gained IPv6LL Mar 6 01:50:45.611596 systemd-networkd[1405]: vxlan.calico: Link UP Mar 6 01:50:45.611640 systemd-networkd[1405]: vxlan.calico: Gained carrier Mar 6 01:50:45.746457 systemd-networkd[1405]: cali075e95b88ff: Link UP Mar 6 01:50:45.751360 systemd-networkd[1405]: cali075e95b88ff: Gained carrier Mar 6 01:50:45.755600 systemd-networkd[1405]: cali77fc011ec67: Gained IPv6LL Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.580 [INFO][4525] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--66cb699ccf--49w4k-eth0 whisker-66cb699ccf- 
calico-system f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5 1022 0 2026-03-06 01:50:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66cb699ccf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-66cb699ccf-49w4k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali075e95b88ff [] [] }} ContainerID="58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" Namespace="calico-system" Pod="whisker-66cb699ccf-49w4k" WorkloadEndpoint="localhost-k8s-whisker--66cb699ccf--49w4k-" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.580 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" Namespace="calico-system" Pod="whisker-66cb699ccf-49w4k" WorkloadEndpoint="localhost-k8s-whisker--66cb699ccf--49w4k-eth0" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.652 [INFO][4546] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" HandleID="k8s-pod-network.58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" Workload="localhost-k8s-whisker--66cb699ccf--49w4k-eth0" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.662 [INFO][4546] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" HandleID="k8s-pod-network.58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" Workload="localhost-k8s-whisker--66cb699ccf--49w4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a47a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-66cb699ccf-49w4k", "timestamp":"2026-03-06 01:50:45.652091948 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002c38c0)} Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.662 [INFO][4546] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.662 [INFO][4546] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.662 [INFO][4546] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.668 [INFO][4546] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" host="localhost" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.680 [INFO][4546] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.694 [INFO][4546] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.703 [INFO][4546] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.708 [INFO][4546] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.708 [INFO][4546] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" host="localhost" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.712 [INFO][4546] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2 Mar 6 
01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.719 [INFO][4546] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" host="localhost" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.730 [INFO][4546] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" host="localhost" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.730 [INFO][4546] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" host="localhost" Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.731 [INFO][4546] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:50:45.779982 containerd[1469]: 2026-03-06 01:50:45.733 [INFO][4546] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" HandleID="k8s-pod-network.58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" Workload="localhost-k8s-whisker--66cb699ccf--49w4k-eth0" Mar 6 01:50:45.781157 containerd[1469]: 2026-03-06 01:50:45.741 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" Namespace="calico-system" Pod="whisker-66cb699ccf-49w4k" WorkloadEndpoint="localhost-k8s-whisker--66cb699ccf--49w4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66cb699ccf--49w4k-eth0", GenerateName:"whisker-66cb699ccf-", Namespace:"calico-system", SelfLink:"", UID:"f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5", ResourceVersion:"1022", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66cb699ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-66cb699ccf-49w4k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali075e95b88ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:45.781157 containerd[1469]: 2026-03-06 01:50:45.741 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" Namespace="calico-system" Pod="whisker-66cb699ccf-49w4k" WorkloadEndpoint="localhost-k8s-whisker--66cb699ccf--49w4k-eth0" Mar 6 01:50:45.781157 containerd[1469]: 2026-03-06 01:50:45.742 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali075e95b88ff ContainerID="58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" Namespace="calico-system" Pod="whisker-66cb699ccf-49w4k" WorkloadEndpoint="localhost-k8s-whisker--66cb699ccf--49w4k-eth0" Mar 6 01:50:45.781157 containerd[1469]: 2026-03-06 01:50:45.747 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" Namespace="calico-system" 
Pod="whisker-66cb699ccf-49w4k" WorkloadEndpoint="localhost-k8s-whisker--66cb699ccf--49w4k-eth0" Mar 6 01:50:45.781157 containerd[1469]: 2026-03-06 01:50:45.748 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" Namespace="calico-system" Pod="whisker-66cb699ccf-49w4k" WorkloadEndpoint="localhost-k8s-whisker--66cb699ccf--49w4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66cb699ccf--49w4k-eth0", GenerateName:"whisker-66cb699ccf-", Namespace:"calico-system", SelfLink:"", UID:"f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66cb699ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2", Pod:"whisker-66cb699ccf-49w4k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali075e95b88ff", MAC:"fe:c1:4d:29:2b:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:50:45.781157 containerd[1469]: 2026-03-06 01:50:45.768 
[INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2" Namespace="calico-system" Pod="whisker-66cb699ccf-49w4k" WorkloadEndpoint="localhost-k8s-whisker--66cb699ccf--49w4k-eth0" Mar 6 01:50:45.838846 containerd[1469]: time="2026-03-06T01:50:45.838314016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:50:45.838846 containerd[1469]: time="2026-03-06T01:50:45.838361344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:50:45.838846 containerd[1469]: time="2026-03-06T01:50:45.838373537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:45.838846 containerd[1469]: time="2026-03-06T01:50:45.838479875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:50:45.896158 kubelet[2576]: E0306 01:50:45.896093 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:45.897327 systemd[1]: Started cri-containerd-58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2.scope - libcontainer container 58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2. 
Mar 6 01:50:45.898007 kubelet[2576]: E0306 01:50:45.897881 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:46.018997 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:50:46.085467 containerd[1469]: time="2026-03-06T01:50:46.084812872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cb699ccf-49w4k,Uid:f5a8075c-2d5f-43ad-9752-b4dcdc06c4a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2\"" Mar 6 01:50:46.136147 kubelet[2576]: I0306 01:50:46.136040 2576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b21f2ee9-d154-41f0-994a-34e4bb6425e8" path="/var/lib/kubelet/pods/b21f2ee9-d154-41f0-994a-34e4bb6425e8/volumes" Mar 6 01:50:46.637986 containerd[1469]: time="2026-03-06T01:50:46.637836467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:46.639835 containerd[1469]: time="2026-03-06T01:50:46.639630562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 6 01:50:46.641700 containerd[1469]: time="2026-03-06T01:50:46.641638878Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:46.645923 containerd[1469]: time="2026-03-06T01:50:46.645818373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:46.646937 containerd[1469]: time="2026-03-06T01:50:46.646847402Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.240238832s" Mar 6 01:50:46.646937 containerd[1469]: time="2026-03-06T01:50:46.646927742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 01:50:46.648928 containerd[1469]: time="2026-03-06T01:50:46.648782141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 6 01:50:46.654770 containerd[1469]: time="2026-03-06T01:50:46.654395043Z" level=info msg="CreateContainer within sandbox \"32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 01:50:46.677765 containerd[1469]: time="2026-03-06T01:50:46.677583090Z" level=info msg="CreateContainer within sandbox \"32fb65c7a129d0d578bc5c5c7a2abe8b929ce5fad611fbc673e5e1afd2529ccf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5d4edecf6873b6545f4b1bcae6c6d49247ba850f187efc516a062f5d7d774084\"" Mar 6 01:50:46.679081 containerd[1469]: time="2026-03-06T01:50:46.679041650Z" level=info msg="StartContainer for \"5d4edecf6873b6545f4b1bcae6c6d49247ba850f187efc516a062f5d7d774084\"" Mar 6 01:50:46.728623 systemd[1]: Started cri-containerd-5d4edecf6873b6545f4b1bcae6c6d49247ba850f187efc516a062f5d7d774084.scope - libcontainer container 5d4edecf6873b6545f4b1bcae6c6d49247ba850f187efc516a062f5d7d774084. 
Mar 6 01:50:46.802991 containerd[1469]: time="2026-03-06T01:50:46.802846275Z" level=info msg="StartContainer for \"5d4edecf6873b6545f4b1bcae6c6d49247ba850f187efc516a062f5d7d774084\" returns successfully" Mar 6 01:50:46.914380 kubelet[2576]: E0306 01:50:46.911955 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:46.914380 kubelet[2576]: E0306 01:50:46.913919 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:47.036946 systemd-networkd[1405]: cali075e95b88ff: Gained IPv6LL Mar 6 01:50:47.611437 systemd-networkd[1405]: vxlan.calico: Gained IPv6LL Mar 6 01:50:47.915800 kubelet[2576]: I0306 01:50:47.915593 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:50:47.917279 kubelet[2576]: E0306 01:50:47.916952 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:50:48.027918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194911019.mount: Deactivated successfully. 
Mar 6 01:50:48.935054 containerd[1469]: time="2026-03-06T01:50:48.934913534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:48.937046 containerd[1469]: time="2026-03-06T01:50:48.936968959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 6 01:50:48.953374 containerd[1469]: time="2026-03-06T01:50:48.952849460Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:48.962303 containerd[1469]: time="2026-03-06T01:50:48.962194504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:48.963348 containerd[1469]: time="2026-03-06T01:50:48.963200590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.314355662s" Mar 6 01:50:48.963456 containerd[1469]: time="2026-03-06T01:50:48.963349017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 6 01:50:48.965332 containerd[1469]: time="2026-03-06T01:50:48.965196281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 6 01:50:48.970160 containerd[1469]: time="2026-03-06T01:50:48.970074479Z" level=info msg="CreateContainer within sandbox 
\"e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 6 01:50:48.991135 containerd[1469]: time="2026-03-06T01:50:48.991041453Z" level=info msg="CreateContainer within sandbox \"e000568b760fec287620812a510a24995ffb52f18922157a8a77c7c3a3e24eed\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e28a7f5e7c80eaaac2fbeb545a6a57f4ee18cd44ad28254ff1465375ff18f324\"" Mar 6 01:50:48.992898 containerd[1469]: time="2026-03-06T01:50:48.992743570Z" level=info msg="StartContainer for \"e28a7f5e7c80eaaac2fbeb545a6a57f4ee18cd44ad28254ff1465375ff18f324\"" Mar 6 01:50:49.056586 systemd[1]: Started cri-containerd-e28a7f5e7c80eaaac2fbeb545a6a57f4ee18cd44ad28254ff1465375ff18f324.scope - libcontainer container e28a7f5e7c80eaaac2fbeb545a6a57f4ee18cd44ad28254ff1465375ff18f324. Mar 6 01:50:49.152807 containerd[1469]: time="2026-03-06T01:50:49.152652810Z" level=info msg="StartContainer for \"e28a7f5e7c80eaaac2fbeb545a6a57f4ee18cd44ad28254ff1465375ff18f324\" returns successfully" Mar 6 01:50:49.940716 kubelet[2576]: I0306 01:50:49.940473 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-8459ffd5ff-w4b5h" podStartSLOduration=25.697822425 podStartE2EDuration="28.940452611s" podCreationTimestamp="2026-03-06 01:50:21 +0000 UTC" firstStartedPulling="2026-03-06 01:50:43.405961451 +0000 UTC m=+39.643530349" lastFinishedPulling="2026-03-06 01:50:46.648591638 +0000 UTC m=+42.886160535" observedRunningTime="2026-03-06 01:50:46.933144262 +0000 UTC m=+43.170713188" watchObservedRunningTime="2026-03-06 01:50:49.940452611 +0000 UTC m=+46.178021518" Mar 6 01:50:49.941420 kubelet[2576]: I0306 01:50:49.940867 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-km8qm" podStartSLOduration=22.808090486 podStartE2EDuration="27.940860571s" podCreationTimestamp="2026-03-06 01:50:22 +0000 UTC" 
firstStartedPulling="2026-03-06 01:50:43.832310941 +0000 UTC m=+40.069879838" lastFinishedPulling="2026-03-06 01:50:48.965081026 +0000 UTC m=+45.202649923" observedRunningTime="2026-03-06 01:50:49.940128616 +0000 UTC m=+46.177697513" watchObservedRunningTime="2026-03-06 01:50:49.940860571 +0000 UTC m=+46.178429469" Mar 6 01:50:51.560990 containerd[1469]: time="2026-03-06T01:50:51.560891411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:51.561956 containerd[1469]: time="2026-03-06T01:50:51.561874184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 6 01:50:51.576382 containerd[1469]: time="2026-03-06T01:50:51.576281015Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:51.579614 containerd[1469]: time="2026-03-06T01:50:51.579539652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:51.580546 containerd[1469]: time="2026-03-06T01:50:51.580385322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.615040194s" Mar 6 01:50:51.580546 containerd[1469]: time="2026-03-06T01:50:51.580483987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference 
\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 6 01:50:51.584196 containerd[1469]: time="2026-03-06T01:50:51.582726419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 01:50:51.609069 containerd[1469]: time="2026-03-06T01:50:51.608983379Z" level=info msg="CreateContainer within sandbox \"db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 6 01:50:51.628686 containerd[1469]: time="2026-03-06T01:50:51.628555879Z" level=info msg="CreateContainer within sandbox \"db3a37f78e9cde1edc7a33852da24557935d030fbb02b7788a352834f362f632\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"76cd6646b72e2a3704b8b3103f95ae4af053371d3df59efef0c49303ebacc34a\"" Mar 6 01:50:51.632141 containerd[1469]: time="2026-03-06T01:50:51.630783564Z" level=info msg="StartContainer for \"76cd6646b72e2a3704b8b3103f95ae4af053371d3df59efef0c49303ebacc34a\"" Mar 6 01:50:51.712078 containerd[1469]: time="2026-03-06T01:50:51.711953550Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:51.713561 containerd[1469]: time="2026-03-06T01:50:51.713436134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 6 01:50:51.716927 containerd[1469]: time="2026-03-06T01:50:51.716851458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 133.990518ms" Mar 6 01:50:51.716927 containerd[1469]: time="2026-03-06T01:50:51.716887516Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 01:50:51.718654 containerd[1469]: time="2026-03-06T01:50:51.718606963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 6 01:50:51.729912 containerd[1469]: time="2026-03-06T01:50:51.729743605Z" level=info msg="CreateContainer within sandbox \"daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 01:50:51.742628 systemd[1]: Started cri-containerd-76cd6646b72e2a3704b8b3103f95ae4af053371d3df59efef0c49303ebacc34a.scope - libcontainer container 76cd6646b72e2a3704b8b3103f95ae4af053371d3df59efef0c49303ebacc34a. Mar 6 01:50:51.761834 containerd[1469]: time="2026-03-06T01:50:51.761739600Z" level=info msg="CreateContainer within sandbox \"daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"56840c40405d90aa7a869c7c987ee43ead6aaa804fb9b4e5bd7928425e4c0963\"" Mar 6 01:50:51.762830 containerd[1469]: time="2026-03-06T01:50:51.762746026Z" level=info msg="StartContainer for \"56840c40405d90aa7a869c7c987ee43ead6aaa804fb9b4e5bd7928425e4c0963\"" Mar 6 01:50:51.814486 systemd[1]: Started cri-containerd-56840c40405d90aa7a869c7c987ee43ead6aaa804fb9b4e5bd7928425e4c0963.scope - libcontainer container 56840c40405d90aa7a869c7c987ee43ead6aaa804fb9b4e5bd7928425e4c0963. 
Mar 6 01:50:51.819070 containerd[1469]: time="2026-03-06T01:50:51.818951996Z" level=info msg="StartContainer for \"76cd6646b72e2a3704b8b3103f95ae4af053371d3df59efef0c49303ebacc34a\" returns successfully" Mar 6 01:50:51.884909 containerd[1469]: time="2026-03-06T01:50:51.884759514Z" level=info msg="StartContainer for \"56840c40405d90aa7a869c7c987ee43ead6aaa804fb9b4e5bd7928425e4c0963\" returns successfully" Mar 6 01:50:51.900459 kubelet[2576]: I0306 01:50:51.899633 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:50:51.961874 kubelet[2576]: I0306 01:50:51.961754 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-8459ffd5ff-d94vx" podStartSLOduration=23.728196356 podStartE2EDuration="30.961741212s" podCreationTimestamp="2026-03-06 01:50:21 +0000 UTC" firstStartedPulling="2026-03-06 01:50:44.484908649 +0000 UTC m=+40.722477545" lastFinishedPulling="2026-03-06 01:50:51.718453505 +0000 UTC m=+47.956022401" observedRunningTime="2026-03-06 01:50:51.958093727 +0000 UTC m=+48.195662625" watchObservedRunningTime="2026-03-06 01:50:51.961741212 +0000 UTC m=+48.199310109" Mar 6 01:50:52.138139 kubelet[2576]: I0306 01:50:52.137913 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6569c6d5b5-lvltr" podStartSLOduration=22.536734992 podStartE2EDuration="30.137897388s" podCreationTimestamp="2026-03-06 01:50:22 +0000 UTC" firstStartedPulling="2026-03-06 01:50:43.980679862 +0000 UTC m=+40.218248789" lastFinishedPulling="2026-03-06 01:50:51.581842288 +0000 UTC m=+47.819411185" observedRunningTime="2026-03-06 01:50:51.984804915 +0000 UTC m=+48.222373862" watchObservedRunningTime="2026-03-06 01:50:52.137897388 +0000 UTC m=+48.375466285" Mar 6 01:50:54.436937 containerd[1469]: time="2026-03-06T01:50:54.436812602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 6 01:50:54.438532 containerd[1469]: time="2026-03-06T01:50:54.438388190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 6 01:50:54.439963 containerd[1469]: time="2026-03-06T01:50:54.439896444Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:54.444538 containerd[1469]: time="2026-03-06T01:50:54.444418147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:54.445239 containerd[1469]: time="2026-03-06T01:50:54.445155182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.726492014s" Mar 6 01:50:54.445278 containerd[1469]: time="2026-03-06T01:50:54.445252163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 6 01:50:54.447197 containerd[1469]: time="2026-03-06T01:50:54.447141350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 6 01:50:54.453351 containerd[1469]: time="2026-03-06T01:50:54.453151268Z" level=info msg="CreateContainer within sandbox \"e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 6 01:50:54.487047 containerd[1469]: time="2026-03-06T01:50:54.486936163Z" level=info msg="CreateContainer within sandbox 
\"e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4b4675b12098e3ad8d12328d4c614223c48db8ef417f1f5829e9b828513ab3a7\"" Mar 6 01:50:54.488470 containerd[1469]: time="2026-03-06T01:50:54.488389606Z" level=info msg="StartContainer for \"4b4675b12098e3ad8d12328d4c614223c48db8ef417f1f5829e9b828513ab3a7\"" Mar 6 01:50:54.544491 systemd[1]: Started cri-containerd-4b4675b12098e3ad8d12328d4c614223c48db8ef417f1f5829e9b828513ab3a7.scope - libcontainer container 4b4675b12098e3ad8d12328d4c614223c48db8ef417f1f5829e9b828513ab3a7. Mar 6 01:50:54.608590 containerd[1469]: time="2026-03-06T01:50:54.607902372Z" level=info msg="StartContainer for \"4b4675b12098e3ad8d12328d4c614223c48db8ef417f1f5829e9b828513ab3a7\" returns successfully" Mar 6 01:50:55.170559 containerd[1469]: time="2026-03-06T01:50:55.170464666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:55.172285 containerd[1469]: time="2026-03-06T01:50:55.171576991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 6 01:50:55.173180 containerd[1469]: time="2026-03-06T01:50:55.173113796Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:55.177298 containerd[1469]: time="2026-03-06T01:50:55.177172182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:55.177942 containerd[1469]: time="2026-03-06T01:50:55.177836758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 730.666243ms" Mar 6 01:50:55.177942 containerd[1469]: time="2026-03-06T01:50:55.177879356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 6 01:50:55.179588 containerd[1469]: time="2026-03-06T01:50:55.179289276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 6 01:50:55.185473 containerd[1469]: time="2026-03-06T01:50:55.185378368Z" level=info msg="CreateContainer within sandbox \"58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 6 01:50:55.209797 containerd[1469]: time="2026-03-06T01:50:55.209703326Z" level=info msg="CreateContainer within sandbox \"58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e523f5152ea11d0d1266f927d83d163ad2a924dd67da11562f051bd883b4e2cb\"" Mar 6 01:50:55.210689 containerd[1469]: time="2026-03-06T01:50:55.210669921Z" level=info msg="StartContainer for \"e523f5152ea11d0d1266f927d83d163ad2a924dd67da11562f051bd883b4e2cb\"" Mar 6 01:50:55.264483 systemd[1]: Started cri-containerd-e523f5152ea11d0d1266f927d83d163ad2a924dd67da11562f051bd883b4e2cb.scope - libcontainer container e523f5152ea11d0d1266f927d83d163ad2a924dd67da11562f051bd883b4e2cb. 
Mar 6 01:50:55.341999 containerd[1469]: time="2026-03-06T01:50:55.341925233Z" level=info msg="StartContainer for \"e523f5152ea11d0d1266f927d83d163ad2a924dd67da11562f051bd883b4e2cb\" returns successfully" Mar 6 01:50:56.842732 containerd[1469]: time="2026-03-06T01:50:56.842642908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:56.843739 containerd[1469]: time="2026-03-06T01:50:56.843675114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 6 01:50:56.845107 containerd[1469]: time="2026-03-06T01:50:56.845014681Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:56.849925 containerd[1469]: time="2026-03-06T01:50:56.849872635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.67055745s" Mar 6 01:50:56.849925 containerd[1469]: time="2026-03-06T01:50:56.849902781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 6 01:50:56.850284 containerd[1469]: time="2026-03-06T01:50:56.850087175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:56.871109 containerd[1469]: 
time="2026-03-06T01:50:56.871051881Z" level=info msg="CreateContainer within sandbox \"e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 6 01:50:56.877783 containerd[1469]: time="2026-03-06T01:50:56.877666367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 6 01:50:56.890975 containerd[1469]: time="2026-03-06T01:50:56.890898423Z" level=info msg="CreateContainer within sandbox \"e8ec1a6484fc0ee121bf90b37cf3d7a48908db58bd7f2cd45a3818d0ae61e8cc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"858c57bb85d676a3d90fde1448ed1f79b00a40f4d4951bd4df4c4877863dde8c\"" Mar 6 01:50:56.894338 containerd[1469]: time="2026-03-06T01:50:56.891886722Z" level=info msg="StartContainer for \"858c57bb85d676a3d90fde1448ed1f79b00a40f4d4951bd4df4c4877863dde8c\"" Mar 6 01:50:56.938419 systemd[1]: Started cri-containerd-858c57bb85d676a3d90fde1448ed1f79b00a40f4d4951bd4df4c4877863dde8c.scope - libcontainer container 858c57bb85d676a3d90fde1448ed1f79b00a40f4d4951bd4df4c4877863dde8c. Mar 6 01:50:56.984906 containerd[1469]: time="2026-03-06T01:50:56.984819331Z" level=info msg="StartContainer for \"858c57bb85d676a3d90fde1448ed1f79b00a40f4d4951bd4df4c4877863dde8c\" returns successfully" Mar 6 01:50:57.430478 kubelet[2576]: I0306 01:50:57.430404 2576 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 6 01:50:57.431545 kubelet[2576]: I0306 01:50:57.431444 2576 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 6 01:50:58.096346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108739342.mount: Deactivated successfully. 
Mar 6 01:50:58.135603 containerd[1469]: time="2026-03-06T01:50:58.135478781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:58.136802 containerd[1469]: time="2026-03-06T01:50:58.136743470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 6 01:50:58.138053 containerd[1469]: time="2026-03-06T01:50:58.138011293Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:58.141098 containerd[1469]: time="2026-03-06T01:50:58.141049762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:50:58.142151 containerd[1469]: time="2026-03-06T01:50:58.142118865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.264389781s" Mar 6 01:50:58.142277 containerd[1469]: time="2026-03-06T01:50:58.142155835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 6 01:50:58.148792 containerd[1469]: time="2026-03-06T01:50:58.148737824Z" level=info msg="CreateContainer within sandbox \"58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 6 01:50:58.168917 
containerd[1469]: time="2026-03-06T01:50:58.168822030Z" level=info msg="CreateContainer within sandbox \"58c3af453b3969b7bd2fbeafb448ec91cd88b7f965e1a0ba82bfad3d3e0bd9c2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8ffce8b2b53dbf1a4f6d7daafe7041fc53491c80abcbcbf39b119e7a8c410d91\"" Mar 6 01:50:58.170071 containerd[1469]: time="2026-03-06T01:50:58.170044515Z" level=info msg="StartContainer for \"8ffce8b2b53dbf1a4f6d7daafe7041fc53491c80abcbcbf39b119e7a8c410d91\"" Mar 6 01:50:58.220624 systemd[1]: Started cri-containerd-8ffce8b2b53dbf1a4f6d7daafe7041fc53491c80abcbcbf39b119e7a8c410d91.scope - libcontainer container 8ffce8b2b53dbf1a4f6d7daafe7041fc53491c80abcbcbf39b119e7a8c410d91. Mar 6 01:50:58.279894 containerd[1469]: time="2026-03-06T01:50:58.279784278Z" level=info msg="StartContainer for \"8ffce8b2b53dbf1a4f6d7daafe7041fc53491c80abcbcbf39b119e7a8c410d91\" returns successfully" Mar 6 01:50:59.006816 kubelet[2576]: I0306 01:50:59.006653 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-66cb699ccf-49w4k" podStartSLOduration=1.949469597 podStartE2EDuration="14.00663793s" podCreationTimestamp="2026-03-06 01:50:45 +0000 UTC" firstStartedPulling="2026-03-06 01:50:46.08674982 +0000 UTC m=+42.324318717" lastFinishedPulling="2026-03-06 01:50:58.143918153 +0000 UTC m=+54.381487050" observedRunningTime="2026-03-06 01:50:59.006480717 +0000 UTC m=+55.244049615" watchObservedRunningTime="2026-03-06 01:50:59.00663793 +0000 UTC m=+55.244206827" Mar 6 01:50:59.008852 kubelet[2576]: I0306 01:50:59.007121 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xv9jc" podStartSLOduration=25.310777546 podStartE2EDuration="37.0071149s" podCreationTimestamp="2026-03-06 01:50:22 +0000 UTC" firstStartedPulling="2026-03-06 01:50:45.168368893 +0000 UTC m=+41.405937791" lastFinishedPulling="2026-03-06 01:50:56.864706248 +0000 UTC m=+53.102275145" 
observedRunningTime="2026-03-06 01:50:58.005334425 +0000 UTC m=+54.242903573" watchObservedRunningTime="2026-03-06 01:50:59.0071149 +0000 UTC m=+55.244683798" Mar 6 01:51:02.627005 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:55152.service - OpenSSH per-connection server daemon (10.0.0.1:55152). Mar 6 01:51:02.729295 sshd[5222]: Accepted publickey for core from 10.0.0.1 port 55152 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:02.731702 sshd[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:02.738367 systemd-logind[1447]: New session 10 of user core. Mar 6 01:51:02.751427 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 6 01:51:03.200635 sshd[5222]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:03.205610 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:55152.service: Deactivated successfully. Mar 6 01:51:03.208634 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 01:51:03.209943 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Mar 6 01:51:03.211338 systemd-logind[1447]: Removed session 10. Mar 6 01:51:04.113428 containerd[1469]: time="2026-03-06T01:51:04.113011333Z" level=info msg="StopPodSandbox for \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\"" Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.222 [WARNING][5259] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--k659t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8ed54e90-4fd2-4aae-b446-8c5b8cd922cd", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68", Pod:"coredns-66bc5c9577-k659t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali700c9b2467a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.222 [INFO][5259] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.223 [INFO][5259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" iface="eth0" netns="" Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.223 [INFO][5259] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.223 [INFO][5259] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.285 [INFO][5267] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" HandleID="k8s-pod-network.b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.285 [INFO][5267] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.286 [INFO][5267] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.294 [WARNING][5267] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" HandleID="k8s-pod-network.b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.294 [INFO][5267] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" HandleID="k8s-pod-network.b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.296 [INFO][5267] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:51:04.304164 containerd[1469]: 2026-03-06 01:51:04.300 [INFO][5259] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:51:04.311563 containerd[1469]: time="2026-03-06T01:51:04.311489643Z" level=info msg="TearDown network for sandbox \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\" successfully" Mar 6 01:51:04.311563 containerd[1469]: time="2026-03-06T01:51:04.311556949Z" level=info msg="StopPodSandbox for \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\" returns successfully" Mar 6 01:51:04.334203 containerd[1469]: time="2026-03-06T01:51:04.334134334Z" level=info msg="RemovePodSandbox for \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\"" Mar 6 01:51:04.336077 containerd[1469]: time="2026-03-06T01:51:04.335994079Z" level=info msg="Forcibly stopping sandbox \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\"" Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.407 [WARNING][5286] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--k659t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8ed54e90-4fd2-4aae-b446-8c5b8cd922cd", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8774b4969c0f1d1fad189a336bc83cc0ae7dec5aabffd4b1dddae9cc14e68a68", Pod:"coredns-66bc5c9577-k659t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali700c9b2467a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.409 [INFO][5286] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.409 [INFO][5286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" iface="eth0" netns="" Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.409 [INFO][5286] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.409 [INFO][5286] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.475 [INFO][5294] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" HandleID="k8s-pod-network.b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.476 [INFO][5294] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.476 [INFO][5294] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.482 [WARNING][5294] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" HandleID="k8s-pod-network.b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.482 [INFO][5294] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" HandleID="k8s-pod-network.b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Workload="localhost-k8s-coredns--66bc5c9577--k659t-eth0" Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.483 [INFO][5294] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:51:04.491039 containerd[1469]: 2026-03-06 01:51:04.488 [INFO][5286] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d" Mar 6 01:51:04.492400 containerd[1469]: time="2026-03-06T01:51:04.491049216Z" level=info msg="TearDown network for sandbox \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\" successfully" Mar 6 01:51:04.497291 containerd[1469]: time="2026-03-06T01:51:04.497185295Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 6 01:51:04.497361 containerd[1469]: time="2026-03-06T01:51:04.497331188Z" level=info msg="RemovePodSandbox \"b3edfa8dea300e40160885510f221aabc9dbf2ba81fafe7743367a028bf4969d\" returns successfully" Mar 6 01:51:04.514490 containerd[1469]: time="2026-03-06T01:51:04.514433517Z" level=info msg="StopPodSandbox for \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\"" Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.558 [WARNING][5311] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0", GenerateName:"calico-apiserver-8459ffd5ff-", Namespace:"calico-system", SelfLink:"", UID:"36c75156-d099-419f-a2ec-938f6d71a9bf", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8459ffd5ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e", Pod:"calico-apiserver-8459ffd5ff-d94vx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc5ecd7fb88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.558 [INFO][5311] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.558 [INFO][5311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" iface="eth0" netns="" Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.558 [INFO][5311] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.558 [INFO][5311] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.590 [INFO][5321] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" HandleID="k8s-pod-network.818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.591 [INFO][5321] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.591 [INFO][5321] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.597 [WARNING][5321] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" HandleID="k8s-pod-network.818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.597 [INFO][5321] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" HandleID="k8s-pod-network.818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.599 [INFO][5321] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:51:04.606690 containerd[1469]: 2026-03-06 01:51:04.603 [INFO][5311] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:51:04.607146 containerd[1469]: time="2026-03-06T01:51:04.606742827Z" level=info msg="TearDown network for sandbox \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\" successfully" Mar 6 01:51:04.607146 containerd[1469]: time="2026-03-06T01:51:04.606767323Z" level=info msg="StopPodSandbox for \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\" returns successfully" Mar 6 01:51:04.607557 containerd[1469]: time="2026-03-06T01:51:04.607502007Z" level=info msg="RemovePodSandbox for \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\"" Mar 6 01:51:04.607782 containerd[1469]: time="2026-03-06T01:51:04.607600760Z" level=info msg="Forcibly stopping sandbox \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\"" Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.659 [WARNING][5339] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0", GenerateName:"calico-apiserver-8459ffd5ff-", Namespace:"calico-system", SelfLink:"", UID:"36c75156-d099-419f-a2ec-938f6d71a9bf", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 50, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8459ffd5ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"daadc0b84de9403e3351f571c94af1c1b2078f56cdc3b7d830b13ca400ebd68e", Pod:"calico-apiserver-8459ffd5ff-d94vx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibc5ecd7fb88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.660 [INFO][5339] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.660 [INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" iface="eth0" netns="" Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.660 [INFO][5339] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.660 [INFO][5339] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.692 [INFO][5347] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" HandleID="k8s-pod-network.818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.693 [INFO][5347] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.693 [INFO][5347] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.700 [WARNING][5347] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" HandleID="k8s-pod-network.818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.700 [INFO][5347] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" HandleID="k8s-pod-network.818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Workload="localhost-k8s-calico--apiserver--8459ffd5ff--d94vx-eth0" Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.702 [INFO][5347] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:51:04.708635 containerd[1469]: 2026-03-06 01:51:04.705 [INFO][5339] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b" Mar 6 01:51:04.709259 containerd[1469]: time="2026-03-06T01:51:04.708666074Z" level=info msg="TearDown network for sandbox \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\" successfully" Mar 6 01:51:04.720475 containerd[1469]: time="2026-03-06T01:51:04.720397101Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:51:04.720577 containerd[1469]: time="2026-03-06T01:51:04.720504042Z" level=info msg="RemovePodSandbox \"818b1552eda146362700bf8149049d29e41f1775a68f72ce65d131dc165f920b\" returns successfully" Mar 6 01:51:08.225782 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:55162.service - OpenSSH per-connection server daemon (10.0.0.1:55162). 
Mar 6 01:51:08.319218 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 55162 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:08.321348 sshd[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:08.327076 systemd-logind[1447]: New session 11 of user core. Mar 6 01:51:08.340446 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 6 01:51:08.515907 sshd[5371]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:08.519813 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:55162.service: Deactivated successfully. Mar 6 01:51:08.523021 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 01:51:08.524674 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Mar 6 01:51:08.526568 systemd-logind[1447]: Removed session 11. Mar 6 01:51:13.534166 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:54896.service - OpenSSH per-connection server daemon (10.0.0.1:54896). Mar 6 01:51:13.587891 sshd[5390]: Accepted publickey for core from 10.0.0.1 port 54896 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:13.590376 sshd[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:13.596874 systemd-logind[1447]: New session 12 of user core. Mar 6 01:51:13.605478 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 6 01:51:13.773071 sshd[5390]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:13.778772 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:54896.service: Deactivated successfully. Mar 6 01:51:13.780995 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 01:51:13.782136 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Mar 6 01:51:13.784468 systemd-logind[1447]: Removed session 12. 
Mar 6 01:51:15.133487 kubelet[2576]: E0306 01:51:15.133392 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:51:18.135073 kubelet[2576]: E0306 01:51:18.134665 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:51:18.790028 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:54900.service - OpenSSH per-connection server daemon (10.0.0.1:54900). Mar 6 01:51:18.838741 sshd[5407]: Accepted publickey for core from 10.0.0.1 port 54900 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:18.840633 sshd[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:18.846268 systemd-logind[1447]: New session 13 of user core. Mar 6 01:51:18.853394 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 6 01:51:18.996176 sshd[5407]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:19.008769 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:54900.service: Deactivated successfully. Mar 6 01:51:19.010832 systemd[1]: session-13.scope: Deactivated successfully. Mar 6 01:51:19.012852 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Mar 6 01:51:19.022737 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:54912.service - OpenSSH per-connection server daemon (10.0.0.1:54912). Mar 6 01:51:19.024261 systemd-logind[1447]: Removed session 13. Mar 6 01:51:19.055073 sshd[5423]: Accepted publickey for core from 10.0.0.1 port 54912 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:19.056999 sshd[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:19.062594 systemd-logind[1447]: New session 14 of user core. 
Mar 6 01:51:19.069440 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 6 01:51:19.269719 sshd[5423]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:19.286074 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:54912.service: Deactivated successfully. Mar 6 01:51:19.288944 systemd[1]: session-14.scope: Deactivated successfully. Mar 6 01:51:19.293965 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. Mar 6 01:51:19.304853 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:54926.service - OpenSSH per-connection server daemon (10.0.0.1:54926). Mar 6 01:51:19.306318 systemd-logind[1447]: Removed session 14. Mar 6 01:51:19.371519 sshd[5435]: Accepted publickey for core from 10.0.0.1 port 54926 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:19.374009 sshd[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:19.385683 systemd-logind[1447]: New session 15 of user core. Mar 6 01:51:19.392964 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 6 01:51:19.581433 sshd[5435]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:19.586133 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. Mar 6 01:51:19.587466 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:54926.service: Deactivated successfully. Mar 6 01:51:19.589942 systemd[1]: session-15.scope: Deactivated successfully. Mar 6 01:51:19.591697 systemd-logind[1447]: Removed session 15. Mar 6 01:51:20.976440 systemd[1]: run-containerd-runc-k8s.io-e28a7f5e7c80eaaac2fbeb545a6a57f4ee18cd44ad28254ff1465375ff18f324-runc.kAmfIw.mount: Deactivated successfully. Mar 6 01:51:24.604019 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:35018.service - OpenSSH per-connection server daemon (10.0.0.1:35018). 
Mar 6 01:51:24.710181 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 35018 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:24.711872 sshd[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:24.720360 systemd-logind[1447]: New session 16 of user core. Mar 6 01:51:24.726596 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 6 01:51:24.976539 sshd[5553]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:24.982900 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:35018.service: Deactivated successfully. Mar 6 01:51:24.986810 systemd[1]: session-16.scope: Deactivated successfully. Mar 6 01:51:24.989920 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Mar 6 01:51:24.992080 systemd-logind[1447]: Removed session 16. Mar 6 01:51:27.134834 kubelet[2576]: E0306 01:51:27.134704 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:51:27.135676 kubelet[2576]: E0306 01:51:27.134931 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:51:29.988478 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:35022.service - OpenSSH per-connection server daemon (10.0.0.1:35022). Mar 6 01:51:30.027902 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 35022 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:30.030622 sshd[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:30.036519 systemd-logind[1447]: New session 17 of user core. Mar 6 01:51:30.044662 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 6 01:51:30.191933 sshd[5584]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:30.203557 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:35022.service: Deactivated successfully. Mar 6 01:51:30.206435 systemd[1]: session-17.scope: Deactivated successfully. Mar 6 01:51:30.210098 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Mar 6 01:51:30.218905 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:51440.service - OpenSSH per-connection server daemon (10.0.0.1:51440). Mar 6 01:51:30.220937 systemd-logind[1447]: Removed session 17. Mar 6 01:51:30.252188 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 51440 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:30.254655 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:30.263051 systemd-logind[1447]: New session 18 of user core. Mar 6 01:51:30.271701 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 6 01:51:30.688130 sshd[5598]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:30.696200 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:51440.service: Deactivated successfully. Mar 6 01:51:30.698783 systemd[1]: session-18.scope: Deactivated successfully. Mar 6 01:51:30.701068 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. Mar 6 01:51:30.706907 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:51446.service - OpenSSH per-connection server daemon (10.0.0.1:51446). Mar 6 01:51:30.708401 systemd-logind[1447]: Removed session 18. Mar 6 01:51:30.786929 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 51446 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:30.789183 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:30.795631 systemd-logind[1447]: New session 19 of user core. Mar 6 01:51:30.802496 systemd[1]: Started session-19.scope - Session 19 of User core. 
Mar 6 01:51:31.454551 sshd[5610]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:31.469630 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:51448.service - OpenSSH per-connection server daemon (10.0.0.1:51448). Mar 6 01:51:31.470127 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:51446.service: Deactivated successfully. Mar 6 01:51:31.472069 systemd[1]: session-19.scope: Deactivated successfully. Mar 6 01:51:31.478799 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. Mar 6 01:51:31.480940 systemd-logind[1447]: Removed session 19. Mar 6 01:51:31.507397 sshd[5634]: Accepted publickey for core from 10.0.0.1 port 51448 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:31.509430 sshd[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:31.515593 systemd-logind[1447]: New session 20 of user core. Mar 6 01:51:31.530415 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 6 01:51:31.838543 sshd[5634]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:31.847834 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:51448.service: Deactivated successfully. Mar 6 01:51:31.850626 systemd[1]: session-20.scope: Deactivated successfully. Mar 6 01:51:31.852976 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit. Mar 6 01:51:31.865845 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:51452.service - OpenSSH per-connection server daemon (10.0.0.1:51452). Mar 6 01:51:31.870101 systemd-logind[1447]: Removed session 20. Mar 6 01:51:31.925941 sshd[5649]: Accepted publickey for core from 10.0.0.1 port 51452 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:31.936678 sshd[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:31.949008 systemd-logind[1447]: New session 21 of user core. Mar 6 01:51:31.959463 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 6 01:51:32.178194 sshd[5649]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:32.181881 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:51452.service: Deactivated successfully. Mar 6 01:51:32.184172 systemd[1]: session-21.scope: Deactivated successfully. Mar 6 01:51:32.186197 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. Mar 6 01:51:32.188453 systemd-logind[1447]: Removed session 21. Mar 6 01:51:37.198955 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:51468.service - OpenSSH per-connection server daemon (10.0.0.1:51468). Mar 6 01:51:37.246314 sshd[5665]: Accepted publickey for core from 10.0.0.1 port 51468 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:37.249175 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:37.256387 systemd-logind[1447]: New session 22 of user core. Mar 6 01:51:37.264540 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 6 01:51:37.429753 sshd[5665]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:37.434999 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:51468.service: Deactivated successfully. Mar 6 01:51:37.437641 systemd[1]: session-22.scope: Deactivated successfully. Mar 6 01:51:37.438852 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. Mar 6 01:51:37.440644 systemd-logind[1447]: Removed session 22. Mar 6 01:51:37.540741 kubelet[2576]: I0306 01:51:37.540651 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:51:42.444500 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:33734.service - OpenSSH per-connection server daemon (10.0.0.1:33734). 
Mar 6 01:51:42.528389 sshd[5685]: Accepted publickey for core from 10.0.0.1 port 33734 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:42.530544 sshd[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:42.536576 systemd-logind[1447]: New session 23 of user core. Mar 6 01:51:42.549449 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 6 01:51:42.730117 sshd[5685]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:42.736442 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:33734.service: Deactivated successfully. Mar 6 01:51:42.739136 systemd[1]: session-23.scope: Deactivated successfully. Mar 6 01:51:42.740399 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit. Mar 6 01:51:42.742666 systemd-logind[1447]: Removed session 23. Mar 6 01:51:47.755707 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:33736.service - OpenSSH per-connection server daemon (10.0.0.1:33736). Mar 6 01:51:47.799563 sshd[5701]: Accepted publickey for core from 10.0.0.1 port 33736 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:51:47.802033 sshd[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:51:47.808069 systemd-logind[1447]: New session 24 of user core. Mar 6 01:51:47.822527 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 6 01:51:47.975122 sshd[5701]: pam_unix(sshd:session): session closed for user core Mar 6 01:51:47.983017 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:33736.service: Deactivated successfully. Mar 6 01:51:47.985353 systemd[1]: session-24.scope: Deactivated successfully. Mar 6 01:51:47.987513 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit. Mar 6 01:51:47.989485 systemd-logind[1447]: Removed session 24. 
Mar 6 01:51:48.134981 kubelet[2576]: E0306 01:51:48.134666 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"