Mar 6 01:41:27.098411 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:31:42 -00 2026 Mar 6 01:41:27.098442 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c Mar 6 01:41:27.098461 kernel: BIOS-provided physical RAM map: Mar 6 01:41:27.098472 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 6 01:41:27.098482 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 6 01:41:27.098492 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 6 01:41:27.098503 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 6 01:41:27.098513 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 6 01:41:27.098575 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 6 01:41:27.098587 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 6 01:41:27.098602 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 6 01:41:27.098613 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 6 01:41:27.098624 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 6 01:41:27.098634 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 6 01:41:27.098646 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 6 01:41:27.098658 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 6 01:41:27.098673 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 6 01:41:27.098684 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 6 01:41:27.098695 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 6 01:41:27.098706 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 6 01:41:27.098749 kernel: NX (Execute Disable) protection: active Mar 6 01:41:27.098761 kernel: APIC: Static calls initialized Mar 6 01:41:27.098772 kernel: efi: EFI v2.7 by EDK II Mar 6 01:41:27.098783 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 6 01:41:27.098794 kernel: SMBIOS 2.8 present. Mar 6 01:41:27.098805 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 6 01:41:27.098815 kernel: Hypervisor detected: KVM Mar 6 01:41:27.098832 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 6 01:41:27.098843 kernel: kvm-clock: using sched offset of 6474621582 cycles Mar 6 01:41:27.098855 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 6 01:41:27.098866 kernel: tsc: Detected 2445.426 MHz processor Mar 6 01:41:27.098878 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 6 01:41:27.098889 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 6 01:41:27.098901 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 6 01:41:27.098913 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 6 01:41:27.098924 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 6 01:41:27.098939 kernel: Using GB pages for direct mapping Mar 6 01:41:27.098950 kernel: Secure boot disabled Mar 6 01:41:27.098962 kernel: ACPI: Early table checksum verification disabled Mar 6 01:41:27.098973 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 6 01:41:27.098991 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 6 01:41:27.099004 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:27.099017 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:27.099034 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 6 01:41:27.099046 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:27.099058 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:27.099070 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:27.099083 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 6 01:41:27.099094 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 6 01:41:27.099106 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 6 01:41:27.099123 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 6 01:41:27.099136 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 6 01:41:27.099148 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 6 01:41:27.099159 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 6 01:41:27.099171 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 6 01:41:27.099183 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 6 01:41:27.099195 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 6 01:41:27.099206 kernel: No NUMA configuration found Mar 6 01:41:27.099219 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 6 01:41:27.099235 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 6 01:41:27.099246 kernel: Zone ranges: Mar 6 01:41:27.099258 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 6 01:41:27.099271 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 6 01:41:27.099283 kernel: Normal empty Mar 6 01:41:27.099295 
kernel: Movable zone start for each node Mar 6 01:41:27.099307 kernel: Early memory node ranges Mar 6 01:41:27.099319 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 6 01:41:27.099330 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 6 01:41:27.099342 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 6 01:41:27.099359 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 6 01:41:27.099371 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 6 01:41:27.099383 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 6 01:41:27.099394 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 6 01:41:27.099407 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 6 01:41:27.099419 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 6 01:41:27.099430 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 6 01:41:27.099443 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 6 01:41:27.099455 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 6 01:41:27.099472 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 6 01:41:27.099483 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 6 01:41:27.099495 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 6 01:41:27.099507 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 6 01:41:27.099565 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 6 01:41:27.099579 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 6 01:41:27.099591 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 6 01:41:27.099604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 6 01:41:27.099616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 6 01:41:27.099633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 6 01:41:27.099644 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Mar 6 01:41:27.099656 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 6 01:41:27.099668 kernel: TSC deadline timer available Mar 6 01:41:27.099680 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 6 01:41:27.099691 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 6 01:41:27.099704 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 6 01:41:27.099745 kernel: kvm-guest: setup PV sched yield Mar 6 01:41:27.099758 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 6 01:41:27.099775 kernel: Booting paravirtualized kernel on KVM Mar 6 01:41:27.099787 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 6 01:41:27.099799 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 6 01:41:27.099811 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 6 01:41:27.099823 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 6 01:41:27.099835 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 6 01:41:27.099848 kernel: kvm-guest: PV spinlocks enabled Mar 6 01:41:27.099859 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 6 01:41:27.099872 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c Mar 6 01:41:27.099889 kernel: random: crng init done Mar 6 01:41:27.099901 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 6 01:41:27.099913 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 6 01:41:27.099924 kernel: Fallback order for Node 0: 0 Mar 6 01:41:27.099936 kernel: Built 1 
zonelists, mobility grouping on. Total pages: 629759 Mar 6 01:41:27.099947 kernel: Policy zone: DMA32 Mar 6 01:41:27.099959 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 6 01:41:27.099972 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 6 01:41:27.099989 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 6 01:41:27.100001 kernel: ftrace: allocating 37996 entries in 149 pages Mar 6 01:41:27.100013 kernel: ftrace: allocated 149 pages with 4 groups Mar 6 01:41:27.100025 kernel: Dynamic Preempt: voluntary Mar 6 01:41:27.100037 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 6 01:41:27.100064 kernel: rcu: RCU event tracing is enabled. Mar 6 01:41:27.100081 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 6 01:41:27.100094 kernel: Trampoline variant of Tasks RCU enabled. Mar 6 01:41:27.100106 kernel: Rude variant of Tasks RCU enabled. Mar 6 01:41:27.100118 kernel: Tracing variant of Tasks RCU enabled. Mar 6 01:41:27.100131 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 6 01:41:27.100144 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 6 01:41:27.100157 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 6 01:41:27.100174 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 6 01:41:27.100186 kernel: Console: colour dummy device 80x25 Mar 6 01:41:27.100198 kernel: printk: console [ttyS0] enabled Mar 6 01:41:27.100234 kernel: ACPI: Core revision 20230628 Mar 6 01:41:27.100247 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 6 01:41:27.100284 kernel: APIC: Switch to symmetric I/O mode setup Mar 6 01:41:27.100297 kernel: x2apic enabled Mar 6 01:41:27.100329 kernel: APIC: Switched APIC routing to: physical x2apic Mar 6 01:41:27.100342 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 6 01:41:27.100355 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 6 01:41:27.100386 kernel: kvm-guest: setup PV IPIs Mar 6 01:41:27.100400 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 6 01:41:27.100412 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 6 01:41:27.100424 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 6 01:41:27.100441 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 6 01:41:27.100455 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 6 01:41:27.100468 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 6 01:41:27.100480 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 6 01:41:27.100492 kernel: Spectre V2 : Mitigation: Retpolines Mar 6 01:41:27.100505 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 6 01:41:27.100518 kernel: Speculative Store Bypass: Vulnerable Mar 6 01:41:27.100666 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 6 01:41:27.100767 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 6 01:41:27.100781 kernel: active return thunk: srso_alias_return_thunk Mar 6 01:41:27.100815 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 6 01:41:27.100848 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 6 01:41:27.100897 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 6 01:41:27.100910 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 6 01:41:27.100959 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 6 01:41:27.100989 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 6 01:41:27.101020 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 6 01:41:27.101074 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 6 01:41:27.101107 kernel: Freeing SMP alternatives memory: 32K Mar 6 01:41:27.101119 kernel: pid_max: default: 32768 minimum: 301 Mar 6 01:41:27.101131 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 6 01:41:27.101144 kernel: landlock: Up and running. Mar 6 01:41:27.101156 kernel: SELinux: Initializing. Mar 6 01:41:27.101169 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 6 01:41:27.101181 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 6 01:41:27.101195 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 6 01:41:27.101212 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 6 01:41:27.101225 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 6 01:41:27.101237 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 6 01:41:27.101250 kernel: Performance Events: PMU not available due to virtualization, using software events only. 
Mar 6 01:41:27.101263 kernel: signal: max sigframe size: 1776 Mar 6 01:41:27.101274 kernel: rcu: Hierarchical SRCU implementation. Mar 6 01:41:27.101288 kernel: rcu: Max phase no-delay instances is 400. Mar 6 01:41:27.101301 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 6 01:41:27.101318 kernel: smp: Bringing up secondary CPUs ... Mar 6 01:41:27.101331 kernel: smpboot: x86: Booting SMP configuration: Mar 6 01:41:27.101343 kernel: .... node #0, CPUs: #1 #2 #3 Mar 6 01:41:27.101355 kernel: smp: Brought up 1 node, 4 CPUs Mar 6 01:41:27.101368 kernel: smpboot: Max logical packages: 1 Mar 6 01:41:27.101380 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 6 01:41:27.101393 kernel: devtmpfs: initialized Mar 6 01:41:27.101405 kernel: x86/mm: Memory block size: 128MB Mar 6 01:41:27.101419 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 6 01:41:27.101431 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 6 01:41:27.101450 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 6 01:41:27.101500 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 6 01:41:27.101515 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 6 01:41:27.101604 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 6 01:41:27.101617 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 6 01:41:27.101630 kernel: pinctrl core: initialized pinctrl subsystem Mar 6 01:41:27.101643 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 6 01:41:27.101655 kernel: audit: initializing netlink subsys (disabled) Mar 6 01:41:27.101673 kernel: audit: type=2000 audit(1772761284.878:1): state=initialized audit_enabled=0 res=1 Mar 6 01:41:27.101686 kernel: thermal_sys: Registered thermal governor 
'step_wise' Mar 6 01:41:27.101698 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 6 01:41:27.101712 kernel: cpuidle: using governor menu Mar 6 01:41:27.101760 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 6 01:41:27.101773 kernel: dca service started, version 1.12.1 Mar 6 01:41:27.101786 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 6 01:41:27.101799 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 6 01:41:27.101811 kernel: PCI: Using configuration type 1 for base access Mar 6 01:41:27.101829 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 6 01:41:27.101841 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 6 01:41:27.101854 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 6 01:41:27.101867 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 6 01:41:27.101880 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 6 01:41:27.101893 kernel: ACPI: Added _OSI(Module Device) Mar 6 01:41:27.101905 kernel: ACPI: Added _OSI(Processor Device) Mar 6 01:41:27.101918 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 6 01:41:27.101931 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 6 01:41:27.101947 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 6 01:41:27.101959 kernel: ACPI: Interpreter enabled Mar 6 01:41:27.101972 kernel: ACPI: PM: (supports S0 S3 S5) Mar 6 01:41:27.101984 kernel: ACPI: Using IOAPIC for interrupt routing Mar 6 01:41:27.101998 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 6 01:41:27.102010 kernel: PCI: Using E820 reservations for host bridge windows Mar 6 01:41:27.102023 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 6 01:41:27.102036 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 
6 01:41:27.102314 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 6 01:41:27.102593 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 6 01:41:27.102847 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 6 01:41:27.102866 kernel: PCI host bridge to bus 0000:00 Mar 6 01:41:27.103312 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 6 01:41:27.103656 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 6 01:41:27.103891 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 6 01:41:27.104087 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 6 01:41:27.104406 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 6 01:41:27.104674 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 6 01:41:27.104907 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 6 01:41:27.105140 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 6 01:41:27.105358 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 6 01:41:27.105621 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 6 01:41:27.105877 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 6 01:41:27.106081 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 6 01:41:27.106281 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 6 01:41:27.106483 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 6 01:41:27.106851 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 6 01:41:27.107059 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 6 01:41:27.107269 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 6 01:41:27.107469 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 6 
01:41:27.107779 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 6 01:41:27.107991 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 6 01:41:27.108194 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 6 01:41:27.108395 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 6 01:41:27.108687 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 6 01:41:27.108936 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 6 01:41:27.109141 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 6 01:41:27.109342 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 6 01:41:27.109660 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 6 01:41:27.109915 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 6 01:41:27.110119 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 6 01:41:27.110332 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 6 01:41:27.110598 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 6 01:41:27.110841 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 6 01:41:27.111054 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 6 01:41:27.111253 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 6 01:41:27.111272 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 6 01:41:27.111285 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 6 01:41:27.111298 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 6 01:41:27.111311 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 6 01:41:27.111330 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 6 01:41:27.111343 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 6 01:41:27.111355 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 6 01:41:27.111368 kernel: ACPI: PCI: Interrupt 
link LNKH configured for IRQ 11 Mar 6 01:41:27.111381 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 6 01:41:27.111393 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 6 01:41:27.111406 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 6 01:41:27.111419 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 6 01:41:27.111432 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 6 01:41:27.111450 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 6 01:41:27.111462 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 6 01:41:27.111474 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 6 01:41:27.111488 kernel: iommu: Default domain type: Translated Mar 6 01:41:27.111501 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 6 01:41:27.111513 kernel: efivars: Registered efivars operations Mar 6 01:41:27.111579 kernel: PCI: Using ACPI for IRQ routing Mar 6 01:41:27.111594 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 6 01:41:27.111606 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 6 01:41:27.111624 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 6 01:41:27.111636 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 6 01:41:27.111650 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 6 01:41:27.111888 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 6 01:41:27.112090 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 6 01:41:27.112293 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 6 01:41:27.112311 kernel: vgaarb: loaded Mar 6 01:41:27.112325 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 6 01:41:27.112343 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 6 01:41:27.112356 kernel: clocksource: Switched to clocksource kvm-clock Mar 6 01:41:27.112369 kernel: VFS: Disk quotas dquot_6.6.0 Mar 6 
01:41:27.112381 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 6 01:41:27.112394 kernel: pnp: PnP ACPI init Mar 6 01:41:27.112664 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 6 01:41:27.112685 kernel: pnp: PnP ACPI: found 6 devices Mar 6 01:41:27.112698 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 6 01:41:27.112756 kernel: NET: Registered PF_INET protocol family Mar 6 01:41:27.112770 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 6 01:41:27.112784 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 6 01:41:27.112796 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 6 01:41:27.112809 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 6 01:41:27.112822 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 6 01:41:27.112836 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 6 01:41:27.112848 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 6 01:41:27.112861 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 6 01:41:27.112878 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 6 01:41:27.112891 kernel: NET: Registered PF_XDP protocol family Mar 6 01:41:27.113096 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 6 01:41:27.113299 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 6 01:41:27.113486 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 6 01:41:27.113778 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 6 01:41:27.113966 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 6 01:41:27.114150 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff 
window] Mar 6 01:41:27.114341 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 6 01:41:27.114660 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 6 01:41:27.114682 kernel: PCI: CLS 0 bytes, default 64 Mar 6 01:41:27.114695 kernel: Initialise system trusted keyrings Mar 6 01:41:27.114709 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 6 01:41:27.114757 kernel: Key type asymmetric registered Mar 6 01:41:27.114770 kernel: Asymmetric key parser 'x509' registered Mar 6 01:41:27.114782 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 6 01:41:27.114796 kernel: io scheduler mq-deadline registered Mar 6 01:41:27.114814 kernel: io scheduler kyber registered Mar 6 01:41:27.114826 kernel: io scheduler bfq registered Mar 6 01:41:27.114843 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 6 01:41:27.114857 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 6 01:41:27.114870 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 6 01:41:27.114883 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 6 01:41:27.114895 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 6 01:41:27.114908 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 6 01:41:27.114920 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 6 01:41:27.114938 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 6 01:41:27.114951 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 6 01:41:27.115167 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 6 01:41:27.115186 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 6 01:41:27.115372 kernel: rtc_cmos 00:04: registered as rtc0 Mar 6 01:41:27.115617 kernel: rtc_cmos 00:04: setting system clock to 2026-03-06T01:41:26 UTC (1772761286) Mar 6 01:41:27.115846 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 6 
01:41:27.115865 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 6 01:41:27.115883 kernel: efifb: probing for efifb Mar 6 01:41:27.115896 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 6 01:41:27.115909 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 6 01:41:27.115922 kernel: efifb: scrolling: redraw Mar 6 01:41:27.115934 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 6 01:41:27.115947 kernel: Console: switching to colour frame buffer device 100x37 Mar 6 01:41:27.115960 kernel: fb0: EFI VGA frame buffer device Mar 6 01:41:27.115973 kernel: pstore: Using crash dump compression: deflate Mar 6 01:41:27.115985 kernel: pstore: Registered efi_pstore as persistent store backend Mar 6 01:41:27.116002 kernel: NET: Registered PF_INET6 protocol family Mar 6 01:41:27.116015 kernel: Segment Routing with IPv6 Mar 6 01:41:27.116028 kernel: In-situ OAM (IOAM) with IPv6 Mar 6 01:41:27.116039 kernel: NET: Registered PF_PACKET protocol family Mar 6 01:41:27.116052 kernel: Key type dns_resolver registered Mar 6 01:41:27.116065 kernel: IPI shorthand broadcast: enabled Mar 6 01:41:27.116108 kernel: sched_clock: Marking stable (1220029728, 496641659)->(1973559355, -256887968) Mar 6 01:41:27.116126 kernel: registered taskstats version 1 Mar 6 01:41:27.116139 kernel: Loading compiled-in X.509 certificates Mar 6 01:41:27.116156 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6d88f6264570591a57b3c9c1e1c99fca6c68b8ca' Mar 6 01:41:27.116170 kernel: Key type .fscrypt registered Mar 6 01:41:27.116183 kernel: Key type fscrypt-provisioning registered Mar 6 01:41:27.116196 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 6 01:41:27.116209 kernel: ima: Allocated hash algorithm: sha1 Mar 6 01:41:27.116222 kernel: ima: No architecture policies found Mar 6 01:41:27.116236 kernel: clk: Disabling unused clocks Mar 6 01:41:27.116248 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 6 01:41:27.116262 kernel: Write protecting the kernel read-only data: 36864k Mar 6 01:41:27.116279 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 6 01:41:27.116297 kernel: Run /init as init process Mar 6 01:41:27.116310 kernel: with arguments: Mar 6 01:41:27.116323 kernel: /init Mar 6 01:41:27.116336 kernel: with environment: Mar 6 01:41:27.116348 kernel: HOME=/ Mar 6 01:41:27.116361 kernel: TERM=linux Mar 6 01:41:27.116377 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 6 01:41:27.116398 systemd[1]: Detected virtualization kvm. Mar 6 01:41:27.116413 systemd[1]: Detected architecture x86-64. Mar 6 01:41:27.116427 systemd[1]: Running in initrd. Mar 6 01:41:27.116440 systemd[1]: No hostname configured, using default hostname. Mar 6 01:41:27.116453 systemd[1]: Hostname set to . Mar 6 01:41:27.116468 systemd[1]: Initializing machine ID from VM UUID. Mar 6 01:41:27.116482 systemd[1]: Queued start job for default target initrd.target. Mar 6 01:41:27.116500 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:41:27.116514 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:41:27.116581 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 6 01:41:27.116596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 01:41:27.116610 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 01:41:27.116634 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 01:41:27.116650 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 01:41:27.116665 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 01:41:27.116679 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:41:27.116694 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:41:27.116708 systemd[1]: Reached target paths.target - Path Units.
Mar 6 01:41:27.116755 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 01:41:27.116775 systemd[1]: Reached target swap.target - Swaps.
Mar 6 01:41:27.116788 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 01:41:27.116802 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 01:41:27.116816 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 01:41:27.116831 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 01:41:27.116845 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 6 01:41:27.116860 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:41:27.116873 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:41:27.116888 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:41:27.116906 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 01:41:27.116921 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 01:41:27.116936 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 01:41:27.116950 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 01:41:27.116964 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 01:41:27.116977 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 01:41:27.116991 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 01:41:27.117005 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:41:27.117024 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 01:41:27.117038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:41:27.117081 systemd-journald[193]: Collecting audit messages is disabled.
Mar 6 01:41:27.117112 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 01:41:27.117132 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 01:41:27.117147 systemd-journald[193]: Journal started
Mar 6 01:41:27.117174 systemd-journald[193]: Runtime Journal (/run/log/journal/ff4eed2e478a4142bbdbf78df71d50f3) is 6.0M, max 48.3M, 42.2M free.
Mar 6 01:41:27.101624 systemd-modules-load[194]: Inserted module 'overlay'
Mar 6 01:41:27.120589 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 01:41:27.127675 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:41:27.132258 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:41:27.146833 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:41:27.148144 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 01:41:27.160189 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 01:41:27.185598 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 01:41:27.186975 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:41:27.195373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:41:27.209354 systemd-modules-load[194]: Inserted module 'br_netfilter'
Mar 6 01:41:27.209599 kernel: Bridge firewalling registered
Mar 6 01:41:27.210892 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:41:27.226044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 01:41:27.233470 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:41:27.234933 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 01:41:27.243173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:41:27.260704 dracut-cmdline[229]: dracut-dracut-053
Mar 6 01:41:27.250828 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 01:41:27.267062 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:41:27.310006 systemd-resolved[236]: Positive Trust Anchors:
Mar 6 01:41:27.310037 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 01:41:27.310063 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 01:41:27.312409 systemd-resolved[236]: Defaulting to hostname 'linux'.
Mar 6 01:41:27.313611 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 01:41:27.317582 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:41:27.404602 kernel: SCSI subsystem initialized
Mar 6 01:41:27.415591 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 01:41:27.426594 kernel: iscsi: registered transport (tcp)
Mar 6 01:41:27.448574 kernel: iscsi: registered transport (qla4xxx)
Mar 6 01:41:27.448608 kernel: QLogic iSCSI HBA Driver
Mar 6 01:41:27.502911 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 01:41:27.516810 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 01:41:27.547308 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 01:41:27.547360 kernel: device-mapper: uevent: version 1.0.3
Mar 6 01:41:27.550037 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 6 01:41:27.598595 kernel: raid6: avx2x4 gen() 27960 MB/s
Mar 6 01:41:27.616588 kernel: raid6: avx2x2 gen() 30158 MB/s
Mar 6 01:41:27.635606 kernel: raid6: avx2x1 gen() 25829 MB/s
Mar 6 01:41:27.635632 kernel: raid6: using algorithm avx2x2 gen() 30158 MB/s
Mar 6 01:41:27.655609 kernel: raid6: .... xor() 30262 MB/s, rmw enabled
Mar 6 01:41:27.655647 kernel: raid6: using avx2x2 recovery algorithm
Mar 6 01:41:27.675596 kernel: xor: automatically using best checksumming function   avx
Mar 6 01:41:27.816591 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 01:41:27.830465 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 01:41:27.843886 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:41:27.863403 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 6 01:41:27.868260 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:41:27.884784 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 01:41:27.898649 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Mar 6 01:41:27.935237 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 01:41:27.961867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 01:41:28.036703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:41:28.047826 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 01:41:28.063898 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 01:41:28.071208 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 01:41:28.075006 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:41:28.084609 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 01:41:28.102830 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 01:41:28.113295 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 6 01:41:28.113568 kernel: cryptd: max_cpu_qlen set to 1000
Mar 6 01:41:28.126409 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 6 01:41:28.128422 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 01:41:28.141224 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 6 01:41:28.141296 kernel: GPT:9289727 != 19775487
Mar 6 01:41:28.141314 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 6 01:41:28.141596 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 01:41:28.158636 kernel: GPT:9289727 != 19775487
Mar 6 01:41:28.158751 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 6 01:41:28.158773 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:41:28.141851 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:41:28.164562 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:41:28.178852 kernel: libata version 3.00 loaded.
Mar 6 01:41:28.173564 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 01:41:28.189389 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 6 01:41:28.189417 kernel: AES CTR mode by8 optimization enabled
Mar 6 01:41:28.173809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:41:28.204708 kernel: ahci 0000:00:1f.2: version 3.0
Mar 6 01:41:28.204983 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 6 01:41:28.183326 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:41:28.221393 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 6 01:41:28.221670 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (476)
Mar 6 01:41:28.221683 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 6 01:41:28.221874 kernel: BTRFS: device fsid eccec0b1-0068-4620-ab61-f332f16460fa devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (472)
Mar 6 01:41:28.206928 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:41:28.234326 kernel: scsi host0: ahci
Mar 6 01:41:28.234567 kernel: scsi host1: ahci
Mar 6 01:41:28.234810 kernel: scsi host2: ahci
Mar 6 01:41:28.234974 kernel: scsi host3: ahci
Mar 6 01:41:28.235125 kernel: scsi host4: ahci
Mar 6 01:41:28.239273 kernel: scsi host5: ahci
Mar 6 01:41:28.239450 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 6 01:41:28.239462 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 6 01:41:28.241588 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 6 01:41:28.241617 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 6 01:41:28.241627 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 6 01:41:28.241636 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 6 01:41:28.245344 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 6 01:41:28.264287 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 6 01:41:28.272959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 6 01:41:28.278026 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 6 01:41:28.287780 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 6 01:41:28.304681 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 01:41:28.318633 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:41:28.307816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 01:41:28.327704 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:41:28.307891 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:41:28.334612 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:41:28.334628 disk-uuid[557]: Primary Header is updated.
Mar 6 01:41:28.334628 disk-uuid[557]: Secondary Entries is updated.
Mar 6 01:41:28.334628 disk-uuid[557]: Secondary Header is updated.
Mar 6 01:41:28.318703 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:41:28.328933 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:41:28.364351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:41:28.392487 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:41:28.420929 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:41:28.556613 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 6 01:41:28.556687 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 6 01:41:28.559614 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 6 01:41:28.564601 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 6 01:41:28.564626 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 6 01:41:28.568593 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 6 01:41:28.568616 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 6 01:41:28.572195 kernel: ata3.00: applying bridge limits
Mar 6 01:41:28.572227 kernel: ata3.00: configured for UDMA/100
Mar 6 01:41:28.574599 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 6 01:41:28.631694 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 6 01:41:28.632098 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 6 01:41:28.645628 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 6 01:41:29.333616 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:41:29.334213 disk-uuid[558]: The operation has completed successfully.
Mar 6 01:41:29.370624 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 01:41:29.370800 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 01:41:29.393877 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 01:41:29.403094 sh[602]: Success
Mar 6 01:41:29.414639 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 6 01:41:29.460430 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 01:41:29.484712 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 01:41:29.490215 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 01:41:29.506606 kernel: BTRFS info (device dm-0): first mount of filesystem eccec0b1-0068-4620-ab61-f332f16460fa
Mar 6 01:41:29.506639 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:41:29.506650 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 6 01:41:29.509639 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 6 01:41:29.511687 kernel: BTRFS info (device dm-0): using free space tree
Mar 6 01:41:29.521467 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 01:41:29.526416 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 01:41:29.542769 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 01:41:29.546107 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 01:41:29.570108 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:41:29.570152 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:41:29.570173 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:41:29.578713 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:41:29.589858 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 6 01:41:29.595690 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:41:29.602445 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 01:41:29.609835 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 01:41:29.662948 ignition[705]: Ignition 2.19.0
Mar 6 01:41:29.662980 ignition[705]: Stage: fetch-offline
Mar 6 01:41:29.663016 ignition[705]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:41:29.663026 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:41:29.663122 ignition[705]: parsed url from cmdline: ""
Mar 6 01:41:29.663126 ignition[705]: no config URL provided
Mar 6 01:41:29.663132 ignition[705]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 01:41:29.663141 ignition[705]: no config at "/usr/lib/ignition/user.ign"
Mar 6 01:41:29.663166 ignition[705]: op(1): [started] loading QEMU firmware config module
Mar 6 01:41:29.663171 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 6 01:41:29.675368 ignition[705]: op(1): [finished] loading QEMU firmware config module
Mar 6 01:41:29.702680 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 01:41:29.720803 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 01:41:29.750392 systemd-networkd[790]: lo: Link UP
Mar 6 01:41:29.750436 systemd-networkd[790]: lo: Gained carrier
Mar 6 01:41:29.752501 systemd-networkd[790]: Enumeration completed
Mar 6 01:41:29.752847 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 01:41:29.753611 systemd-networkd[790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:41:29.753617 systemd-networkd[790]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 01:41:29.755133 systemd-networkd[790]: eth0: Link UP
Mar 6 01:41:29.755139 systemd-networkd[790]: eth0: Gained carrier
Mar 6 01:41:29.755150 systemd-networkd[790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:41:29.755426 systemd[1]: Reached target network.target - Network.
Mar 6 01:41:29.793617 systemd-networkd[790]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 01:41:29.812098 systemd-resolved[236]: Detected conflict on linux IN A 10.0.0.92
Mar 6 01:41:29.812140 systemd-resolved[236]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Mar 6 01:41:29.906572 ignition[705]: parsing config with SHA512: 8c9f30d2288c4e7292cd296d9785b5f815d6b1263ab1b52c44440c70f6fa20a6ea3344f8ce3ce4703749865ba110c447e042764316f535e1f5c24bbcbfa2e4bf
Mar 6 01:41:29.912472 unknown[705]: fetched base config from "system"
Mar 6 01:41:29.912495 unknown[705]: fetched user config from "qemu"
Mar 6 01:41:29.912927 ignition[705]: fetch-offline: fetch-offline passed
Mar 6 01:41:29.915178 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 01:41:29.912991 ignition[705]: Ignition finished successfully
Mar 6 01:41:29.920772 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 6 01:41:29.935859 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 01:41:29.952888 ignition[794]: Ignition 2.19.0
Mar 6 01:41:29.952915 ignition[794]: Stage: kargs
Mar 6 01:41:29.953133 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:41:29.953146 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:41:29.962282 ignition[794]: kargs: kargs passed
Mar 6 01:41:29.962360 ignition[794]: Ignition finished successfully
Mar 6 01:41:29.967705 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 01:41:29.982814 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 01:41:29.997143 ignition[802]: Ignition 2.19.0
Mar 6 01:41:29.997178 ignition[802]: Stage: disks
Mar 6 01:41:29.999598 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 01:41:29.997381 ignition[802]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:41:30.005805 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 01:41:29.997393 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:41:30.012834 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 01:41:29.998083 ignition[802]: disks: disks passed
Mar 6 01:41:30.017345 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 01:41:29.998135 ignition[802]: Ignition finished successfully
Mar 6 01:41:30.024855 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 01:41:30.027930 systemd[1]: Reached target basic.target - Basic System.
Mar 6 01:41:30.045899 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 01:41:30.064605 systemd-fsck[813]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 6 01:41:30.068298 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 01:41:30.074196 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 01:41:30.176683 kernel: EXT4-fs (vda9): mounted filesystem 6fb83788-0471-4e89-b45f-3a7586a627a9 r/w with ordered data mode. Quota mode: none.
Mar 6 01:41:30.176968 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 01:41:30.180044 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 01:41:30.198769 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:41:30.219334 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (821)
Mar 6 01:41:30.219377 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:41:30.219393 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:41:30.219408 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:41:30.202480 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 01:41:30.222011 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 6 01:41:30.238705 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:41:30.222064 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 01:41:30.222108 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 01:41:30.233053 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 01:41:30.238948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:41:30.275880 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 01:41:30.324044 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 01:41:30.334901 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory
Mar 6 01:41:30.343129 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 01:41:30.350480 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 01:41:30.478134 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 01:41:30.492800 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 01:41:30.501613 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 01:41:30.513623 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:41:30.507706 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 01:41:30.538017 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 01:41:30.550249 ignition[935]: INFO : Ignition 2.19.0
Mar 6 01:41:30.550249 ignition[935]: INFO : Stage: mount
Mar 6 01:41:30.555768 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:41:30.555768 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:41:30.555768 ignition[935]: INFO : mount: mount passed
Mar 6 01:41:30.555768 ignition[935]: INFO : Ignition finished successfully
Mar 6 01:41:30.572508 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 01:41:30.583921 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 01:41:30.594997 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:41:30.616806 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (948)
Mar 6 01:41:30.616846 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:41:30.616858 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:41:30.621974 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:41:30.628628 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:41:30.631066 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:41:30.670751 ignition[965]: INFO : Ignition 2.19.0
Mar 6 01:41:30.670751 ignition[965]: INFO : Stage: files
Mar 6 01:41:30.676261 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:41:30.676261 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:41:30.676261 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 01:41:30.687666 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 01:41:30.687666 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 01:41:30.687666 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 01:41:30.687666 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 01:41:30.706500 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 01:41:30.706500 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:41:30.706500 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 6 01:41:30.688358 unknown[965]: wrote ssh authorized keys file for user: core
Mar 6 01:41:30.756179 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 6 01:41:30.837021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:41:30.837021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 01:41:30.847179 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 01:41:30.851875 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:41:30.856934 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:41:30.861622 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:41:30.867939 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:41:30.867939 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 01:41:30.867939 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 01:41:30.867939 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 01:41:30.867939 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 01:41:30.867939 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 01:41:30.867939 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 01:41:30.914784 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 01:41:30.920965 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 6 01:41:31.049843 systemd-networkd[790]: eth0: Gained IPv6LL
Mar 6 01:41:31.210032 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 6 01:41:32.113598 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 01:41:32.123103 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 6 01:41:32.123103 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 01:41:32.123103 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 01:41:32.123103 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 6 01:41:32.123103 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 6 01:41:32.123103 ignition[965]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 01:41:32.123103 ignition[965]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 01:41:32.123103 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 6 01:41:32.123103 ignition[965]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 6 01:41:32.193092 ignition[965]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 01:41:32.203436 ignition[965]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 01:41:32.207883 ignition[965]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 6 01:41:32.207883 ignition[965]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 6 01:41:32.207883 ignition[965]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 6 01:41:32.207883 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 01:41:32.207883 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 01:41:32.207883 ignition[965]: INFO : files: files passed
Mar 6 01:41:32.207883 ignition[965]: INFO : Ignition finished successfully
Mar 6 01:41:32.217511 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 6 01:41:32.241067 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 6 01:41:32.244616 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 6 01:41:32.264798 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 6 01:41:32.269909 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:41:32.269909 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:41:32.279340 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:41:32.279381 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 6 01:41:32.287321 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 6 01:41:32.295131 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 01:41:32.304876 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 6 01:41:32.321827 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 6 01:41:32.352097 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 6 01:41:32.354954 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 6 01:41:32.362358 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 6 01:41:32.368073 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 6 01:41:32.376868 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 01:41:32.390725 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 01:41:32.410954 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 01:41:32.425808 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 01:41:32.438598 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:41:32.445491 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:41:32.453675 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 01:41:32.459993 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 01:41:32.463137 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 01:41:32.471444 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 01:41:32.478388 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 01:41:32.484658 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 01:41:32.492072 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 01:41:32.499247 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 01:41:32.505363 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 01:41:32.511222 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 01:41:32.518232 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 6 01:41:32.523984 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 6 01:41:32.529512 systemd[1]: Stopped target swap.target - Swaps.
Mar 6 01:41:32.534407 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 6 01:41:32.537181 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 01:41:32.543410 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:41:32.549473 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:41:32.556341 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 6 01:41:32.558989 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 01:41:32.566117 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 6 01:41:32.568873 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 6 01:41:32.575053 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 6 01:41:32.578008 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 01:41:32.585009 systemd[1]: Stopped target paths.target - Path Units.
Mar 6 01:41:32.592288 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 01:41:32.601692 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 01:41:32.609962 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 01:41:32.617699 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 01:41:32.625184 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 01:41:32.625337 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 01:41:32.634078 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 01:41:32.636623 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 01:41:32.642491 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 6 01:41:32.645816 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 01:41:32.653051 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 6 01:41:32.655793 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 6 01:41:32.674838 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 6 01:41:32.681774 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 6 01:41:32.684823 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 6 01:41:32.690827 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:41:32.698647 ignition[1020]: INFO : Ignition 2.19.0
Mar 6 01:41:32.698647 ignition[1020]: INFO : Stage: umount
Mar 6 01:41:32.698647 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:41:32.698647 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:41:32.730052 ignition[1020]: INFO : umount: umount passed
Mar 6 01:41:32.730052 ignition[1020]: INFO : Ignition finished successfully
Mar 6 01:41:32.698854 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 6 01:41:32.699025 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 01:41:32.708938 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 6 01:41:32.709102 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 6 01:41:32.720639 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 6 01:41:32.723229 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 6 01:41:32.723385 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 6 01:41:32.730832 systemd[1]: Stopped target network.target - Network.
Mar 6 01:41:32.736217 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 6 01:41:32.736289 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 6 01:41:32.740219 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 6 01:41:32.740275 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 6 01:41:32.746626 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 01:41:32.746679 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 01:41:32.750487 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 01:41:32.750605 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 01:41:32.757784 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 01:41:32.764878 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 01:41:32.771609 systemd-networkd[790]: eth0: DHCPv6 lease lost
Mar 6 01:41:32.776682 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 01:41:32.777128 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 01:41:32.783331 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 01:41:32.783611 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 01:41:32.794882 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 01:41:32.794967 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:41:32.824905 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 01:41:32.831818 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 01:41:32.831928 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 01:41:32.839048 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 01:41:32.839142 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:41:32.845886 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 01:41:32.845971 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:41:32.852655 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 01:41:32.852789 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:41:32.861951 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:41:32.871236 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 6 01:41:32.871429 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 6 01:41:32.886325 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 01:41:32.886626 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:41:32.892664 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 01:41:32.892800 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:41:32.897689 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 01:41:32.897803 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:41:32.903992 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 01:41:32.904082 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 01:41:32.910403 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 01:41:32.910486 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 01:41:32.916682 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 01:41:32.916817 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:41:32.924397 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 01:41:32.924471 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 01:41:32.953987 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 01:41:32.959030 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 01:41:32.959111 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:41:32.962703 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 6 01:41:32.962823 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:41:32.968601 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 01:41:32.968662 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:41:33.081944 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Mar 6 01:41:32.974419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 01:41:32.974474 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:41:32.978127 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 01:41:32.978266 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 01:41:32.985574 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 01:41:32.985705 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 01:41:32.994311 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 01:41:33.019076 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 01:41:33.030715 systemd[1]: Switching root.
Mar 6 01:41:33.117296 systemd-journald[193]: Journal stopped
Mar 6 01:41:34.736702 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 01:41:34.736829 kernel: SELinux: policy capability open_perms=1
Mar 6 01:41:34.736849 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 01:41:34.736865 kernel: SELinux: policy capability always_check_network=0
Mar 6 01:41:34.736881 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 01:41:34.736903 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 01:41:34.736919 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 01:41:34.736944 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 01:41:34.736960 kernel: audit: type=1403 audit(1772761293.333:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 01:41:34.736983 systemd[1]: Successfully loaded SELinux policy in 80.893ms.
Mar 6 01:41:34.737015 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.465ms.
Mar 6 01:41:34.737032 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 6 01:41:34.737049 systemd[1]: Detected virtualization kvm.
Mar 6 01:41:34.737065 systemd[1]: Detected architecture x86-64.
Mar 6 01:41:34.737088 systemd[1]: Detected first boot.
Mar 6 01:41:34.737108 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 01:41:34.737125 zram_generator::config[1066]: No configuration found.
Mar 6 01:41:34.737142 systemd[1]: Populated /etc with preset unit settings.
Mar 6 01:41:34.737159 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 6 01:41:34.737177 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 6 01:41:34.737194 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 6 01:41:34.737211 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 01:41:34.737229 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 01:41:34.737248 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 01:41:34.737265 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 01:41:34.737281 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 01:41:34.737298 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 01:41:34.737315 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 01:41:34.737332 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 01:41:34.737349 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 01:41:34.737366 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 01:41:34.737385 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 01:41:34.737405 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 01:41:34.737422 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 01:41:34.737439 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 01:41:34.737455 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 6 01:41:34.737472 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:41:34.737488 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 6 01:41:34.737504 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 6 01:41:34.737566 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 6 01:41:34.737585 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 01:41:34.737606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:41:34.737623 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 01:41:34.737640 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 01:41:34.737656 systemd[1]: Reached target swap.target - Swaps.
Mar 6 01:41:34.737673 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 01:41:34.737689 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 01:41:34.737706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:41:34.737723 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:41:34.737779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:41:34.737801 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 01:41:34.737818 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 01:41:34.737837 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 01:41:34.737853 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 01:41:34.737870 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:41:34.737887 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 01:41:34.737904 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 01:41:34.737921 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 01:41:34.737943 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 01:41:34.737960 systemd[1]: Reached target machines.target - Containers.
Mar 6 01:41:34.737977 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 01:41:34.737994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 01:41:34.738011 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 01:41:34.738028 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 01:41:34.738045 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 01:41:34.738061 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 01:41:34.738081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 01:41:34.738099 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 01:41:34.738115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 01:41:34.738132 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 01:41:34.738149 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 6 01:41:34.738166 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 6 01:41:34.738183 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 6 01:41:34.738201 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 6 01:41:34.738218 kernel: fuse: init (API version 7.39)
Mar 6 01:41:34.738237 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 01:41:34.738253 kernel: ACPI: bus type drm_connector registered
Mar 6 01:41:34.738269 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 01:41:34.738285 kernel: loop: module loaded
Mar 6 01:41:34.738302 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 01:41:34.738319 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 01:41:34.738336 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 01:41:34.738352 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 6 01:41:34.738394 systemd-journald[1150]: Collecting audit messages is disabled.
Mar 6 01:41:34.738427 systemd[1]: Stopped verity-setup.service.
Mar 6 01:41:34.738445 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 6 01:41:34.738462 systemd-journald[1150]: Journal started
Mar 6 01:41:34.738490 systemd-journald[1150]: Runtime Journal (/run/log/journal/ff4eed2e478a4142bbdbf78df71d50f3) is 6.0M, max 48.3M, 42.2M free.
Mar 6 01:41:34.158321 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 01:41:34.182020 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 6 01:41:34.183076 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 6 01:41:34.183779 systemd[1]: systemd-journald.service: Consumed 1.397s CPU time.
Mar 6 01:41:34.749440 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 01:41:34.750334 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 01:41:34.754132 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 01:41:34.758196 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 01:41:34.761707 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 01:41:34.765721 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 01:41:34.770109 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 01:41:34.774023 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 01:41:34.778877 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:41:34.783896 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 01:41:34.784206 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 01:41:34.788668 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 01:41:34.788905 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 01:41:34.793473 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 01:41:34.793855 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 01:41:34.798179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 01:41:34.798428 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 01:41:34.803517 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 01:41:34.803920 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 01:41:34.808360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 01:41:34.808786 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 01:41:34.813136 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:41:34.817674 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 01:41:34.822325 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 01:41:34.843861 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 01:41:34.859660 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 01:41:34.865222 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 01:41:34.869177 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 01:41:34.869253 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 01:41:34.874287 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 6 01:41:34.880720 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 6 01:41:34.887050 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 01:41:34.891179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 01:41:34.893241 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 01:41:34.899404 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 01:41:34.903892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 01:41:34.906047 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 01:41:34.910479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 01:41:34.966041 systemd-journald[1150]: Time spent on flushing to /var/log/journal/ff4eed2e478a4142bbdbf78df71d50f3 is 54.891ms for 987 entries.
Mar 6 01:41:34.966041 systemd-journald[1150]: System Journal (/var/log/journal/ff4eed2e478a4142bbdbf78df71d50f3) is 8.0M, max 195.6M, 187.6M free.
Mar 6 01:41:35.396974 systemd-journald[1150]: Received client request to flush runtime journal.
Mar 6 01:41:35.397100 kernel: loop0: detected capacity change from 0 to 140768
Mar 6 01:41:35.397178 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 01:41:34.962802 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 01:41:35.111343 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 01:41:35.117646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 01:41:35.124722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:41:35.140801 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 01:41:35.146955 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 01:41:35.152344 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 6 01:41:35.158513 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 01:41:35.176600 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 01:41:35.194733 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 6 01:41:35.371649 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 6 01:41:35.377671 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:41:35.390985 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 01:41:35.392095 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 6 01:41:35.401929 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 01:41:35.403505 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Mar 6 01:41:35.403608 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Mar 6 01:41:35.428929 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 6 01:41:35.430314 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:41:35.444801 kernel: loop1: detected capacity change from 0 to 142488
Mar 6 01:41:35.452991 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 01:41:35.514486 kernel: loop2: detected capacity change from 0 to 217752
Mar 6 01:41:35.534988 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 01:41:35.727652 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 01:41:35.756015 kernel: loop3: detected capacity change from 0 to 140768
Mar 6 01:41:35.792334 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Mar 6 01:41:35.792972 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Mar 6 01:41:35.793565 kernel: loop4: detected capacity change from 0 to 142488
Mar 6 01:41:35.800431 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:41:35.816581 kernel: loop5: detected capacity change from 0 to 217752
Mar 6 01:41:35.827220 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 6 01:41:35.828655 (sd-merge)[1205]: Merged extensions into '/usr'.
Mar 6 01:41:35.953890 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 01:41:35.953908 systemd[1]: Reloading...
Mar 6 01:41:36.051880 zram_generator::config[1229]: No configuration found.
Mar 6 01:41:36.484407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 6 01:41:36.535976 ldconfig[1175]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 6 01:41:36.538879 systemd[1]: Reloading finished in 584 ms.
Mar 6 01:41:36.593313 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 6 01:41:36.597052 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 01:41:36.618903 systemd[1]: Starting ensure-sysext.service...
Mar 6 01:41:36.623509 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 01:41:36.631022 systemd[1]: Reloading requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)...
Mar 6 01:41:36.631639 systemd[1]: Reloading...
Mar 6 01:41:36.850590 zram_generator::config[1296]: No configuration found.
Mar 6 01:41:36.876307 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 01:41:36.876734 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 6 01:41:36.877736 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 6 01:41:36.878240 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Mar 6 01:41:36.878419 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Mar 6 01:41:36.882849 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 01:41:36.882946 systemd-tmpfiles[1271]: Skipping /boot
Mar 6 01:41:36.897427 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 01:41:36.897461 systemd-tmpfiles[1271]: Skipping /boot
Mar 6 01:41:36.960795 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 6 01:41:37.004034 systemd[1]: Reloading finished in 371 ms. Mar 6 01:41:37.023918 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 6 01:41:37.037662 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:41:37.050447 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:41:37.055627 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 6 01:41:37.060362 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 6 01:41:37.070671 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 01:41:37.082908 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:41:37.088325 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 6 01:41:37.093158 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 6 01:41:37.101274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:41:37.102052 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:41:37.105860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:41:37.115214 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:41:37.116021 augenrules[1359]: No rules Mar 6 01:41:37.118410 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Mar 6 01:41:37.120885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:41:37.124035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 6 01:41:37.125671 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 6 01:41:37.135700 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 6 01:41:37.138607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:41:37.139959 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 6 01:41:37.144029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:41:37.144235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:41:37.148215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:41:37.148403 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:41:37.153070 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:41:37.153255 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:41:37.157433 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 6 01:41:37.161598 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:41:37.166803 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 6 01:41:37.497380 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 6 01:41:37.503092 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 6 01:41:37.520575 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1384) Mar 6 01:41:37.523483 systemd[1]: Finished ensure-sysext.service. Mar 6 01:41:37.550347 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 6 01:41:37.554698 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 6 01:41:37.554979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:41:37.559578 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 6 01:41:37.568882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:41:37.584480 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 01:41:37.591881 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:41:37.598940 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:41:37.600982 kernel: ACPI: button: Power Button [PWRF] Mar 6 01:41:37.604598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:41:37.615444 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 01:41:37.623261 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 6 01:41:37.634131 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 6 01:41:37.634220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:41:37.635307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:41:37.635728 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:41:37.641680 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 01:41:37.642027 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 01:41:37.644181 systemd-resolved[1341]: Positive Trust Anchors: Mar 6 01:41:37.644685 systemd-resolved[1341]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 01:41:37.644735 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 01:41:37.647851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:41:37.648224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:41:37.655845 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:41:37.656112 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:41:37.668668 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 6 01:41:37.674393 systemd-resolved[1341]: Defaulting to hostname 'linux'. Mar 6 01:41:37.685828 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 01:41:37.700883 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 6 01:41:37.811925 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 6 01:41:37.812253 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 6 01:41:37.812655 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 6 01:41:37.819007 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 6 01:41:37.821797 kernel: mousedev: PS/2 mouse device common for all mice Mar 6 01:41:37.845237 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 6 01:41:37.861105 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 6 01:41:37.866843 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 01:41:37.867063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 01:41:37.878914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 01:41:37.883221 systemd-networkd[1413]: lo: Link UP Mar 6 01:41:37.883230 systemd-networkd[1413]: lo: Gained carrier Mar 6 01:41:37.886302 systemd-networkd[1413]: Enumeration completed Mar 6 01:41:37.886605 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 01:41:37.888612 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:41:37.888620 systemd-networkd[1413]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 01:41:37.890031 systemd-networkd[1413]: eth0: Link UP Mar 6 01:41:37.890035 systemd-networkd[1413]: eth0: Gained carrier Mar 6 01:41:37.890047 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:41:37.895434 systemd[1]: Reached target network.target - Network. Mar 6 01:41:37.905605 systemd-networkd[1413]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 6 01:41:37.971472 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 6 01:41:38.084941 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 6 01:41:38.089343 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Mar 6 01:41:38.093912 systemd[1]: Reached target time-set.target - System Time Set. Mar 6 01:41:38.097290 systemd-timesyncd[1414]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 6 01:41:38.098605 systemd-timesyncd[1414]: Initial clock synchronization to Fri 2026-03-06 01:41:38.229063 UTC. Mar 6 01:41:38.217605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 01:41:38.231185 kernel: kvm_amd: TSC scaling supported Mar 6 01:41:38.231274 kernel: kvm_amd: Nested Virtualization enabled Mar 6 01:41:38.231289 kernel: kvm_amd: Nested Paging enabled Mar 6 01:41:38.232579 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 6 01:41:38.234215 kernel: kvm_amd: PMU virtualization is disabled Mar 6 01:41:38.281601 kernel: EDAC MC: Ver: 3.0.0 Mar 6 01:41:38.314455 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 6 01:41:38.331170 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 6 01:41:38.350107 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 6 01:41:38.386194 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 6 01:41:38.391148 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 6 01:41:38.394404 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 01:41:38.397954 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 6 01:41:38.402006 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 6 01:41:38.406459 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 6 01:41:38.409990 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 6 01:41:38.413684 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 6 01:41:38.417387 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 6 01:41:38.417450 systemd[1]: Reached target paths.target - Path Units. Mar 6 01:41:38.420265 systemd[1]: Reached target timers.target - Timer Units. Mar 6 01:41:38.424154 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 6 01:41:38.429805 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 6 01:41:38.438826 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 6 01:41:38.443855 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 6 01:41:38.447976 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 6 01:41:38.451236 systemd[1]: Reached target sockets.target - Socket Units. Mar 6 01:41:38.454101 systemd[1]: Reached target basic.target - Basic System. Mar 6 01:41:38.456867 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 6 01:41:38.456919 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 6 01:41:38.458326 systemd[1]: Starting containerd.service - containerd container runtime... Mar 6 01:41:38.459315 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 6 01:41:38.463405 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 6 01:41:38.467510 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 6 01:41:38.471830 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 6 01:41:38.475981 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 6 01:41:38.480830 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Mar 6 01:41:38.482919 jq[1443]: false Mar 6 01:41:38.492195 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 6 01:41:38.499697 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 6 01:41:38.506626 extend-filesystems[1444]: Found loop3 Mar 6 01:41:38.506626 extend-filesystems[1444]: Found loop4 Mar 6 01:41:38.506626 extend-filesystems[1444]: Found loop5 Mar 6 01:41:38.506626 extend-filesystems[1444]: Found sr0 Mar 6 01:41:38.506626 extend-filesystems[1444]: Found vda Mar 6 01:41:38.506626 extend-filesystems[1444]: Found vda1 Mar 6 01:41:38.506626 extend-filesystems[1444]: Found vda2 Mar 6 01:41:38.506626 extend-filesystems[1444]: Found vda3 Mar 6 01:41:38.506626 extend-filesystems[1444]: Found usr Mar 6 01:41:38.506626 extend-filesystems[1444]: Found vda4 Mar 6 01:41:38.506626 extend-filesystems[1444]: Found vda6 Mar 6 01:41:38.506626 extend-filesystems[1444]: Found vda7 Mar 6 01:41:38.506626 extend-filesystems[1444]: Found vda9 Mar 6 01:41:38.506626 extend-filesystems[1444]: Checking size of /dev/vda9 Mar 6 01:41:38.581258 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 6 01:41:38.581286 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1375) Mar 6 01:41:38.505359 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 6 01:41:38.519150 dbus-daemon[1442]: [system] SELinux support is enabled Mar 6 01:41:38.589245 extend-filesystems[1444]: Resized partition /dev/vda9 Mar 6 01:41:38.540455 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 6 01:41:38.596868 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Mar 6 01:41:38.635869 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 6 01:41:38.567289 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Mar 6 01:41:38.637132 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 6 01:41:38.637132 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 6 01:41:38.637132 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 6 01:41:38.568162 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 6 01:41:38.641504 update_engine[1465]: I20260306 01:41:38.641021 1465 main.cc:92] Flatcar Update Engine starting Mar 6 01:41:38.650180 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Mar 6 01:41:38.588830 systemd[1]: Starting update-engine.service - Update Engine... Mar 6 01:41:38.658213 update_engine[1465]: I20260306 01:41:38.643122 1465 update_check_scheduler.cc:74] Next update check in 4m36s Mar 6 01:41:38.658427 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 6 01:41:38.599708 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 6 01:41:38.658808 jq[1466]: true Mar 6 01:41:38.604294 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 6 01:41:38.619912 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 6 01:41:38.629369 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 6 01:41:38.629925 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 6 01:41:38.630447 systemd[1]: motdgen.service: Deactivated successfully. Mar 6 01:41:38.630866 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 6 01:41:38.649419 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 6 01:41:38.649823 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Mar 6 01:41:38.654842 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 6 01:41:38.655641 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 6 01:41:38.707924 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 6 01:41:38.711188 systemd-logind[1456]: Watching system buttons on /dev/input/event1 (Power Button) Mar 6 01:41:38.711211 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 6 01:41:38.712700 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 6 01:41:38.713862 systemd-logind[1456]: New seat seat0. Mar 6 01:41:38.718046 jq[1475]: true Mar 6 01:41:38.716933 systemd[1]: Started systemd-logind.service - User Login Management. Mar 6 01:41:38.733727 tar[1469]: linux-amd64/LICENSE Mar 6 01:41:38.733727 tar[1469]: linux-amd64/helm Mar 6 01:41:38.741795 systemd[1]: Started update-engine.service - Update Engine. Mar 6 01:41:38.759281 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 6 01:41:38.764014 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 6 01:41:38.764179 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 6 01:41:38.770077 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 6 01:41:38.770197 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 6 01:41:38.781573 bash[1505]: Updated "/home/core/.ssh/authorized_keys" Mar 6 01:41:38.782000 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 6 01:41:38.789207 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 6 01:41:38.797218 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 6 01:41:38.804462 systemd[1]: issuegen.service: Deactivated successfully. Mar 6 01:41:38.804741 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 6 01:41:38.817955 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 6 01:41:38.997397 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 6 01:41:39.018995 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 6 01:41:39.022035 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 6 01:41:39.026126 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 6 01:41:39.032837 systemd[1]: Reached target getty.target - Login Prompts. Mar 6 01:41:39.692938 systemd-networkd[1413]: eth0: Gained IPv6LL Mar 6 01:41:39.704110 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 6 01:41:39.710516 systemd[1]: Reached target network-online.target - Network is Online. Mar 6 01:41:39.728030 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 6 01:41:39.738768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:41:39.745053 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 6 01:41:39.778032 containerd[1476]: time="2026-03-06T01:41:39.777750106Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 6 01:41:39.805053 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 6 01:41:39.812149 containerd[1476]: time="2026-03-06T01:41:39.811994568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 6 01:41:39.819253 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 6 01:41:39.819489 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 6 01:41:39.829166 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 6 01:41:39.888203 containerd[1476]: time="2026-03-06T01:41:39.887928054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:41:39.889601 containerd[1476]: time="2026-03-06T01:41:39.888763517Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 6 01:41:39.889601 containerd[1476]: time="2026-03-06T01:41:39.888960544Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 6 01:41:39.889601 containerd[1476]: time="2026-03-06T01:41:39.889414376Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 6 01:41:39.889601 containerd[1476]: time="2026-03-06T01:41:39.889434713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 6 01:41:39.890103 containerd[1476]: time="2026-03-06T01:41:39.890052119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:41:39.890164 containerd[1476]: time="2026-03-06T01:41:39.890151330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 6 01:41:39.890938 containerd[1476]: time="2026-03-06T01:41:39.890911635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:41:39.890996 containerd[1476]: time="2026-03-06T01:41:39.890983483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 6 01:41:39.891046 containerd[1476]: time="2026-03-06T01:41:39.891032885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:41:39.891087 containerd[1476]: time="2026-03-06T01:41:39.891076613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 6 01:41:39.891396 containerd[1476]: time="2026-03-06T01:41:39.891374522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:41:39.892106 containerd[1476]: time="2026-03-06T01:41:39.892085498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:41:39.892430 containerd[1476]: time="2026-03-06T01:41:39.892403793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:41:39.892531 containerd[1476]: time="2026-03-06T01:41:39.892515510Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 6 01:41:39.892778 containerd[1476]: time="2026-03-06T01:41:39.892760391Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 6 01:41:39.892934 containerd[1476]: time="2026-03-06T01:41:39.892916754Z" level=info msg="metadata content store policy set" policy=shared Mar 6 01:41:39.900650 containerd[1476]: time="2026-03-06T01:41:39.900619170Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 6 01:41:39.901156 containerd[1476]: time="2026-03-06T01:41:39.901067768Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 6 01:41:39.901156 containerd[1476]: time="2026-03-06T01:41:39.901092210Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.901263522Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.901306936Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.901624346Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.901971972Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.902181718Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.902238860Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.902253036Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.902265949Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.902277161Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.902325321Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.902339904Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.902415489Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.902432659Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 6 01:41:39.903586 containerd[1476]: time="2026-03-06T01:41:39.902443677Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902453678Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902578145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902593757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902605805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902617149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902628494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902640654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902651489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902682234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902830776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902890667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902903010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902914120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902925058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.903948 containerd[1476]: time="2026-03-06T01:41:39.902940059Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 6 01:41:39.904460 containerd[1476]: time="2026-03-06T01:41:39.903018791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.904460 containerd[1476]: time="2026-03-06T01:41:39.903030767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.904460 containerd[1476]: time="2026-03-06T01:41:39.903056715Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 6 01:41:39.904460 containerd[1476]: time="2026-03-06T01:41:39.903148105Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 6 01:41:39.904460 containerd[1476]: time="2026-03-06T01:41:39.903166527Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 6 01:41:39.904460 containerd[1476]: time="2026-03-06T01:41:39.903176590Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 6 01:41:39.904460 containerd[1476]: time="2026-03-06T01:41:39.903187394Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 6 01:41:39.904460 containerd[1476]: time="2026-03-06T01:41:39.903196253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.904460 containerd[1476]: time="2026-03-06T01:41:39.903206774Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 6 01:41:39.904460 containerd[1476]: time="2026-03-06T01:41:39.903246664Z" level=info msg="NRI interface is disabled by configuration."
Mar 6 01:41:39.904460 containerd[1476]: time="2026-03-06T01:41:39.903256908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 6 01:41:39.906001 containerd[1476]: time="2026-03-06T01:41:39.905911186Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 6 01:41:39.907723 containerd[1476]: time="2026-03-06T01:41:39.907698645Z" level=info msg="Connect containerd service"
Mar 6 01:41:39.907993 containerd[1476]: time="2026-03-06T01:41:39.907930043Z" level=info msg="using legacy CRI server"
Mar 6 01:41:39.908067 containerd[1476]: time="2026-03-06T01:41:39.908048562Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 6 01:41:39.908770 containerd[1476]: time="2026-03-06T01:41:39.908744527Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 6 01:41:39.910793 containerd[1476]: time="2026-03-06T01:41:39.910759757Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 6 01:41:39.912355 containerd[1476]: time="2026-03-06T01:41:39.911162498Z" level=info msg="Start subscribing containerd event"
Mar 6 01:41:39.912355 containerd[1476]: time="2026-03-06T01:41:39.911384159Z" level=info msg="Start recovering state"
Mar 6 01:41:39.912355 containerd[1476]: time="2026-03-06T01:41:39.911780331Z" level=info msg="Start event monitor"
Mar 6 01:41:39.912355 containerd[1476]: time="2026-03-06T01:41:39.911873433Z" level=info msg="Start snapshots syncer"
Mar 6 01:41:39.912355 containerd[1476]: time="2026-03-06T01:41:39.911909534Z" level=info msg="Start cni network conf syncer for default"
Mar 6 01:41:39.912355 containerd[1476]: time="2026-03-06T01:41:39.911921663Z" level=info msg="Start streaming server"
Mar 6 01:41:39.913635 tar[1469]: linux-amd64/README.md
Mar 6 01:41:39.914263 containerd[1476]: time="2026-03-06T01:41:39.913697716Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 6 01:41:39.914263 containerd[1476]: time="2026-03-06T01:41:39.913967801Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 6 01:41:39.920645 containerd[1476]: time="2026-03-06T01:41:39.919294305Z" level=info msg="containerd successfully booted in 0.144009s"
Mar 6 01:41:39.940177 systemd[1]: Started containerd.service - containerd container runtime.
Mar 6 01:41:39.946197 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 6 01:41:40.451141 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 6 01:41:40.468007 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:33294.service - OpenSSH per-connection server daemon (10.0.0.1:33294).
Mar 6 01:41:40.545670 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 33294 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:41:40.548997 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:40.568208 systemd-logind[1456]: New session 1 of user core.
Mar 6 01:41:40.569617 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 6 01:41:40.581843 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 6 01:41:40.631026 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 6 01:41:40.678658 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 6 01:41:40.688964 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 6 01:41:40.814439 systemd[1554]: Queued start job for default target default.target.
Mar 6 01:41:40.833016 systemd[1554]: Created slice app.slice - User Application Slice.
Mar 6 01:41:40.833083 systemd[1554]: Reached target paths.target - Paths.
Mar 6 01:41:40.833097 systemd[1554]: Reached target timers.target - Timers.
Mar 6 01:41:40.835142 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 6 01:41:40.855970 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 6 01:41:40.856099 systemd[1554]: Reached target sockets.target - Sockets.
Mar 6 01:41:40.856115 systemd[1554]: Reached target basic.target - Basic System.
Mar 6 01:41:40.856245 systemd[1554]: Reached target default.target - Main User Target.
Mar 6 01:41:40.856300 systemd[1554]: Startup finished in 153ms.
Mar 6 01:41:40.856363 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 6 01:41:40.868724 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 6 01:41:41.094898 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:33306.service - OpenSSH per-connection server daemon (10.0.0.1:33306).
Mar 6 01:41:41.136644 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 33306 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:41:41.139357 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:41.146313 systemd-logind[1456]: New session 2 of user core.
Mar 6 01:41:41.155778 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 6 01:41:41.306949 sshd[1565]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:41.348668 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:33306.service: Deactivated successfully.
Mar 6 01:41:41.352018 systemd[1]: session-2.scope: Deactivated successfully.
Mar 6 01:41:41.355500 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit.
Mar 6 01:41:41.371776 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:33314.service - OpenSSH per-connection server daemon (10.0.0.1:33314).
Mar 6 01:41:41.387227 systemd-logind[1456]: Removed session 2.
Mar 6 01:41:41.508485 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 33314 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:41:41.512219 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:41.520286 systemd-logind[1456]: New session 3 of user core.
Mar 6 01:41:41.529688 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 6 01:41:41.598745 sshd[1572]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:41.603409 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:33314.service: Deactivated successfully.
Mar 6 01:41:41.605796 systemd[1]: session-3.scope: Deactivated successfully.
Mar 6 01:41:41.607473 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit.
Mar 6 01:41:41.609067 systemd-logind[1456]: Removed session 3.
Mar 6 01:41:42.381058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:41:42.386681 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 6 01:41:42.390955 systemd[1]: Startup finished in 1.384s (kernel) + 6.517s (initrd) + 9.130s (userspace) = 17.033s.
Mar 6 01:41:42.398262 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 01:41:43.112440 kubelet[1583]: E0306 01:41:43.112288 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 01:41:43.121135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 01:41:43.121656 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 01:41:43.122937 systemd[1]: kubelet.service: Consumed 2.912s CPU time.
Mar 6 01:41:51.760852 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:58846.service - OpenSSH per-connection server daemon (10.0.0.1:58846).
Mar 6 01:41:51.989141 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 58846 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:41:51.997728 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:52.062426 systemd-logind[1456]: New session 4 of user core.
Mar 6 01:41:52.114995 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 6 01:41:52.252277 sshd[1597]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:52.282169 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:58846.service: Deactivated successfully.
Mar 6 01:41:52.285980 systemd[1]: session-4.scope: Deactivated successfully.
Mar 6 01:41:52.289891 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit.
Mar 6 01:41:52.298044 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:58848.service - OpenSSH per-connection server daemon (10.0.0.1:58848).
Mar 6 01:41:52.300985 systemd-logind[1456]: Removed session 4.
Mar 6 01:41:52.364016 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 58848 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:41:52.367483 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:52.384189 systemd-logind[1456]: New session 5 of user core.
Mar 6 01:41:52.394844 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 6 01:41:52.478630 sshd[1604]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:52.503940 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:58848.service: Deactivated successfully.
Mar 6 01:41:52.507840 systemd[1]: session-5.scope: Deactivated successfully.
Mar 6 01:41:52.514375 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit.
Mar 6 01:41:52.524586 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:58862.service - OpenSSH per-connection server daemon (10.0.0.1:58862).
Mar 6 01:41:52.528124 systemd-logind[1456]: Removed session 5.
Mar 6 01:41:52.591239 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 58862 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:41:52.594619 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:52.610295 systemd-logind[1456]: New session 6 of user core.
Mar 6 01:41:52.623925 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 6 01:41:52.706876 sshd[1611]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:52.725176 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:58862.service: Deactivated successfully.
Mar 6 01:41:52.731075 systemd[1]: session-6.scope: Deactivated successfully.
Mar 6 01:41:52.734165 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit.
Mar 6 01:41:52.754412 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:58870.service - OpenSSH per-connection server daemon (10.0.0.1:58870).
Mar 6 01:41:52.757853 systemd-logind[1456]: Removed session 6.
Mar 6 01:41:52.814631 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 58870 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:41:52.820250 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:52.836586 systemd-logind[1456]: New session 7 of user core.
Mar 6 01:41:52.843767 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 6 01:41:52.934752 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 6 01:41:52.935647 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 01:41:52.964478 sudo[1621]: pam_unix(sudo:session): session closed for user root
Mar 6 01:41:52.968786 sshd[1618]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:52.989263 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:58870.service: Deactivated successfully.
Mar 6 01:41:52.997505 systemd[1]: session-7.scope: Deactivated successfully.
Mar 6 01:41:53.008664 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit.
Mar 6 01:41:53.033048 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:58878.service - OpenSSH per-connection server daemon (10.0.0.1:58878).
Mar 6 01:41:53.059021 systemd-logind[1456]: Removed session 7.
Mar 6 01:41:53.137194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 6 01:41:53.148837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 01:41:53.163307 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 58878 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:41:53.166126 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:53.208260 systemd-logind[1456]: New session 8 of user core.
Mar 6 01:41:53.210303 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 6 01:41:53.291745 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 6 01:41:53.292839 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 01:41:53.305333 sudo[1633]: pam_unix(sudo:session): session closed for user root
Mar 6 01:41:53.327790 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 6 01:41:53.328381 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 01:41:53.370978 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 6 01:41:53.380920 auditctl[1636]: No rules
Mar 6 01:41:53.384189 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 6 01:41:53.385702 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 6 01:41:53.401791 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 6 01:41:53.478905 augenrules[1654]: No rules
Mar 6 01:41:53.480241 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 6 01:41:53.481628 sudo[1632]: pam_unix(sudo:session): session closed for user root
Mar 6 01:41:53.484114 sshd[1626]: pam_unix(sshd:session): session closed for user core
Mar 6 01:41:53.492788 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:58878.service: Deactivated successfully.
Mar 6 01:41:53.495148 systemd[1]: session-8.scope: Deactivated successfully.
Mar 6 01:41:53.498839 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit.
Mar 6 01:41:53.509146 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:58892.service - OpenSSH per-connection server daemon (10.0.0.1:58892).
Mar 6 01:41:53.512689 systemd-logind[1456]: Removed session 8.
Mar 6 01:41:53.564071 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 58892 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:41:53.566418 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:41:53.581688 systemd-logind[1456]: New session 9 of user core.
Mar 6 01:41:53.602295 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 6 01:41:53.667991 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 6 01:41:53.668472 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 6 01:41:54.072810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:41:54.073165 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 01:41:54.203162 kubelet[1680]: E0306 01:41:54.202998 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 01:41:54.208419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 01:41:54.208765 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 01:41:56.129682 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 6 01:41:56.132814 (dockerd)[1697]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 6 01:41:58.107199 dockerd[1697]: time="2026-03-06T01:41:58.106945282Z" level=info msg="Starting up"
Mar 6 01:41:58.471324 dockerd[1697]: time="2026-03-06T01:41:58.471107615Z" level=info msg="Loading containers: start."
Mar 6 01:41:58.930414 kernel: Initializing XFRM netlink socket
Mar 6 01:41:59.054941 systemd-networkd[1413]: docker0: Link UP
Mar 6 01:41:59.085374 dockerd[1697]: time="2026-03-06T01:41:59.085257991Z" level=info msg="Loading containers: done."
Mar 6 01:41:59.132460 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1507844766-merged.mount: Deactivated successfully.
Mar 6 01:41:59.137961 dockerd[1697]: time="2026-03-06T01:41:59.137853656Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 6 01:41:59.138389 dockerd[1697]: time="2026-03-06T01:41:59.138114128Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 6 01:41:59.138436 dockerd[1697]: time="2026-03-06T01:41:59.138421060Z" level=info msg="Daemon has completed initialization"
Mar 6 01:41:59.194435 dockerd[1697]: time="2026-03-06T01:41:59.194161628Z" level=info msg="API listen on /run/docker.sock"
Mar 6 01:41:59.194572 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 6 01:42:01.214682 containerd[1476]: time="2026-03-06T01:42:01.214610550Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 6 01:42:02.344962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788872127.mount: Deactivated successfully.
Mar 6 01:42:03.427351 containerd[1476]: time="2026-03-06T01:42:03.427249386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:03.428248 containerd[1476]: time="2026-03-06T01:42:03.428184769Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467"
Mar 6 01:42:03.430167 containerd[1476]: time="2026-03-06T01:42:03.430102549Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:03.434852 containerd[1476]: time="2026-03-06T01:42:03.434781362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:03.436476 containerd[1476]: time="2026-03-06T01:42:03.436407572Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.22173697s"
Mar 6 01:42:03.436476 containerd[1476]: time="2026-03-06T01:42:03.436468597Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 6 01:42:03.442673 containerd[1476]: time="2026-03-06T01:42:03.442614193Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 6 01:42:04.269915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 6 01:42:04.281851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 01:42:04.845098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:42:04.886998 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 01:42:05.411692 kubelet[1913]: E0306 01:42:05.411507 1913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 01:42:05.416063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 01:42:05.416275 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 01:42:05.416688 systemd[1]: kubelet.service: Consumed 1.216s CPU time.
Mar 6 01:42:06.250576 containerd[1476]: time="2026-03-06T01:42:06.250436188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:06.252744 containerd[1476]: time="2026-03-06T01:42:06.252599770Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700"
Mar 6 01:42:06.254180 containerd[1476]: time="2026-03-06T01:42:06.254086765Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:06.261545 containerd[1476]: time="2026-03-06T01:42:06.261450585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:06.263269 containerd[1476]: time="2026-03-06T01:42:06.263161438Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 2.820470424s"
Mar 6 01:42:06.263382 containerd[1476]: time="2026-03-06T01:42:06.263223260Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 6 01:42:06.265900 containerd[1476]: time="2026-03-06T01:42:06.265856683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 6 01:42:08.680461 containerd[1476]: time="2026-03-06T01:42:08.680292980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:08.682325 containerd[1476]: time="2026-03-06T01:42:08.681886940Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429"
Mar 6 01:42:08.684007 containerd[1476]: time="2026-03-06T01:42:08.683926275Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:08.693033 containerd[1476]: time="2026-03-06T01:42:08.692953692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:08.695336 containerd[1476]: time="2026-03-06T01:42:08.695071420Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 2.429153806s"
Mar 6 01:42:08.695336 containerd[1476]: time="2026-03-06T01:42:08.695175240Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 6 01:42:08.701913 containerd[1476]: time="2026-03-06T01:42:08.701796397Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 6 01:42:11.128694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224532504.mount: Deactivated successfully.
Mar 6 01:42:12.422468 containerd[1476]: time="2026-03-06T01:42:12.422293149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:12.423933 containerd[1476]: time="2026-03-06T01:42:12.423717208Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 6 01:42:12.425701 containerd[1476]: time="2026-03-06T01:42:12.425466373Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:12.437015 containerd[1476]: time="2026-03-06T01:42:12.436900772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:12.437967 containerd[1476]: time="2026-03-06T01:42:12.437827228Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 3.735916299s"
Mar 6 01:42:12.437967 containerd[1476]: time="2026-03-06T01:42:12.437906390Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 6 01:42:12.440336 containerd[1476]: time="2026-03-06T01:42:12.440276345Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 6 01:42:13.063887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2854661017.mount: Deactivated successfully.
Mar 6 01:42:15.497422 containerd[1476]: time="2026-03-06T01:42:15.497289582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:15.498899 containerd[1476]: time="2026-03-06T01:42:15.497996402Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 6 01:42:15.499089 containerd[1476]: time="2026-03-06T01:42:15.499024058Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:15.502685 containerd[1476]: time="2026-03-06T01:42:15.502614207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:15.504061 containerd[1476]: time="2026-03-06T01:42:15.503982621Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 3.063656985s"
Mar 6 01:42:15.504061 containerd[1476]: time="2026-03-06T01:42:15.504029064Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 6 01:42:15.506140 containerd[1476]: time="2026-03-06T01:42:15.505984208Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 6 01:42:15.517661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 6 01:42:15.526866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 01:42:15.911879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:42:15.922480 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 01:42:16.322609 kubelet[2001]: E0306 01:42:16.322390 2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 01:42:16.327235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 01:42:16.327512 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 01:42:16.375158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444063646.mount: Deactivated successfully.
Mar 6 01:42:16.385963 containerd[1476]: time="2026-03-06T01:42:16.385824699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:16.387092 containerd[1476]: time="2026-03-06T01:42:16.386995225Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 6 01:42:16.388495 containerd[1476]: time="2026-03-06T01:42:16.388400639Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:16.393403 containerd[1476]: time="2026-03-06T01:42:16.392350530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:16.424591 containerd[1476]: time="2026-03-06T01:42:16.424447691Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 918.430056ms"
Mar 6 01:42:16.424591 containerd[1476]: time="2026-03-06T01:42:16.424631813Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 6 01:42:16.429394 containerd[1476]: time="2026-03-06T01:42:16.429299961Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 6 01:42:16.696783 kernel: hrtimer: interrupt took 5493859 ns
Mar 6 01:42:17.189855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount471112825.mount: Deactivated successfully.
Mar 6 01:42:20.381907 containerd[1476]: time="2026-03-06T01:42:20.381722723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:20.383130 containerd[1476]: time="2026-03-06T01:42:20.382852962Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 6 01:42:20.384188 containerd[1476]: time="2026-03-06T01:42:20.384073599Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:20.388735 containerd[1476]: time="2026-03-06T01:42:20.388647221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:42:20.389783 containerd[1476]: time="2026-03-06T01:42:20.389709674Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 3.960339612s"
Mar 6 01:42:20.389783 containerd[1476]: time="2026-03-06T01:42:20.389759844Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 6 01:42:23.217263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:42:23.246062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 01:42:23.313874 systemd[1]: Reloading requested from client PID 2106 ('systemctl') (unit session-9.scope)...
Mar 6 01:42:23.313916 systemd[1]: Reloading...
Mar 6 01:42:23.440741 zram_generator::config[2145]: No configuration found.
Mar 6 01:42:23.677439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 6 01:42:23.824043 systemd[1]: Reloading finished in 508 ms.
Mar 6 01:42:23.991208 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 6 01:42:23.991373 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 6 01:42:23.991741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:42:24.026968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 01:42:24.389043 update_engine[1465]: I20260306 01:42:24.385094 1465 update_attempter.cc:509] Updating boot flags...
Mar 6 01:42:24.465180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:42:24.465898 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:42:24.646687 kubelet[2196]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:42:24.656471 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2208) Mar 6 01:42:24.723604 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2212) Mar 6 01:42:24.807651 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2212) Mar 6 01:42:24.957736 kubelet[2196]: I0306 01:42:24.955202 2196 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 6 01:42:24.957736 kubelet[2196]: I0306 01:42:24.955457 2196 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:42:24.957736 kubelet[2196]: I0306 01:42:24.956658 2196 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 6 01:42:24.957736 kubelet[2196]: I0306 01:42:24.956670 2196 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 6 01:42:24.957736 kubelet[2196]: I0306 01:42:24.957164 2196 server.go:951] "Client rotation is on, will bootstrap in background" Mar 6 01:42:24.985763 kubelet[2196]: I0306 01:42:24.985675 2196 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:42:24.986348 kubelet[2196]: E0306 01:42:24.986299 2196 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 01:42:25.015057 kubelet[2196]: E0306 01:42:25.010655 2196 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:42:25.015057 kubelet[2196]: I0306 01:42:25.010716 2196 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 6 01:42:25.037496 kubelet[2196]: I0306 01:42:25.036848 2196 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 6 01:42:25.042362 kubelet[2196]: I0306 01:42:25.041797 2196 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:42:25.042362 kubelet[2196]: I0306 01:42:25.042014 2196 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 01:42:25.043878 kubelet[2196]: I0306 01:42:25.043660 2196 topology_manager.go:143] "Creating topology manager with none policy" Mar 6 01:42:25.044415 
kubelet[2196]: I0306 01:42:25.044123 2196 container_manager_linux.go:308] "Creating device plugin manager" Mar 6 01:42:25.050503 kubelet[2196]: I0306 01:42:25.050352 2196 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 6 01:42:25.063062 kubelet[2196]: I0306 01:42:25.060188 2196 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 6 01:42:25.063062 kubelet[2196]: I0306 01:42:25.060459 2196 kubelet.go:482] "Attempting to sync node with API server" Mar 6 01:42:25.063062 kubelet[2196]: I0306 01:42:25.060474 2196 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:42:25.063062 kubelet[2196]: I0306 01:42:25.060592 2196 kubelet.go:394] "Adding apiserver pod source" Mar 6 01:42:25.063062 kubelet[2196]: I0306 01:42:25.060627 2196 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:42:25.071102 kubelet[2196]: I0306 01:42:25.069899 2196 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:42:25.078872 kubelet[2196]: I0306 01:42:25.076178 2196 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:42:25.078872 kubelet[2196]: I0306 01:42:25.076238 2196 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 6 01:42:25.078872 kubelet[2196]: W0306 01:42:25.076342 2196 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 6 01:42:25.091618 kubelet[2196]: I0306 01:42:25.091501 2196 server.go:1257] "Started kubelet" Mar 6 01:42:25.093702 kubelet[2196]: I0306 01:42:25.092654 2196 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:42:25.119185 kubelet[2196]: I0306 01:42:25.118312 2196 server.go:317] "Adding debug handlers to kubelet server" Mar 6 01:42:25.125158 kubelet[2196]: I0306 01:42:25.125075 2196 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 6 01:42:25.129828 kubelet[2196]: I0306 01:42:25.129652 2196 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 01:42:25.129828 kubelet[2196]: I0306 01:42:25.129832 2196 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 6 01:42:25.132228 kubelet[2196]: I0306 01:42:25.132187 2196 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:42:25.133312 kubelet[2196]: I0306 01:42:25.133223 2196 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 6 01:42:25.133612 kubelet[2196]: E0306 01:42:25.133466 2196 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:42:25.135201 kubelet[2196]: E0306 01:42:25.132781 2196 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1d0d05b82868 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:42:25.091446888 +0000 UTC m=+0.593226228,LastTimestamp:2026-03-06 01:42:25.091446888 +0000 UTC m=+0.593226228,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 01:42:25.137245 kubelet[2196]: E0306 01:42:25.137199 2196 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms" Mar 6 01:42:25.137374 kubelet[2196]: I0306 01:42:25.137263 2196 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:42:25.140098 kubelet[2196]: I0306 01:42:25.139191 2196 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:42:25.141280 kubelet[2196]: I0306 01:42:25.140240 2196 reconciler.go:29] "Reconciler: start to sync state" Mar 6 01:42:25.141280 kubelet[2196]: I0306 01:42:25.140323 2196 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 6 01:42:25.144122 kubelet[2196]: I0306 01:42:25.142741 2196 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:42:25.144122 kubelet[2196]: I0306 01:42:25.142767 2196 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:42:25.147089 kubelet[2196]: E0306 01:42:25.145505 2196 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:42:25.206322 kubelet[2196]: I0306 01:42:25.205801 2196 cpu_manager.go:225] "Starting" policy="none" Mar 6 01:42:25.206322 kubelet[2196]: I0306 01:42:25.205819 2196 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 6 01:42:25.206322 kubelet[2196]: I0306 01:42:25.205861 2196 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 6 01:42:25.216260 kubelet[2196]: I0306 01:42:25.216173 2196 policy_none.go:50] "Start" Mar 6 01:42:25.216260 kubelet[2196]: I0306 01:42:25.216231 2196 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 6 01:42:25.216373 kubelet[2196]: I0306 01:42:25.216301 2196 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 6 01:42:25.226913 kubelet[2196]: I0306 01:42:25.226824 2196 policy_none.go:44] "Start" Mar 6 01:42:25.239737 kubelet[2196]: E0306 01:42:25.234063 2196 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:42:25.239737 kubelet[2196]: I0306 01:42:25.238850 2196 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 6 01:42:25.235509 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 6 01:42:25.248602 kubelet[2196]: I0306 01:42:25.248451 2196 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 6 01:42:25.248815 kubelet[2196]: I0306 01:42:25.248740 2196 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 6 01:42:25.250591 kubelet[2196]: I0306 01:42:25.249965 2196 kubelet.go:2501] "Starting kubelet main sync loop" Mar 6 01:42:25.259268 kubelet[2196]: E0306 01:42:25.250101 2196 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:42:25.272284 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 6 01:42:25.288427 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 6 01:42:25.306479 kubelet[2196]: E0306 01:42:25.306364 2196 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:42:25.306813 kubelet[2196]: I0306 01:42:25.306714 2196 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 6 01:42:25.306813 kubelet[2196]: I0306 01:42:25.306765 2196 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:42:25.307591 kubelet[2196]: I0306 01:42:25.307453 2196 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 6 01:42:25.314819 kubelet[2196]: E0306 01:42:25.314423 2196 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 6 01:42:25.314819 kubelet[2196]: E0306 01:42:25.314472 2196 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 6 01:42:25.338897 kubelet[2196]: E0306 01:42:25.338723 2196 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" Mar 6 01:42:25.398405 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 6 01:42:25.416244 kubelet[2196]: I0306 01:42:25.415345 2196 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 6 01:42:25.416417 kubelet[2196]: E0306 01:42:25.416390 2196 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Mar 6 01:42:25.431190 kubelet[2196]: E0306 01:42:25.430706 2196 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:25.444311 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 6 01:42:25.446633 kubelet[2196]: I0306 01:42:25.445636 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:25.446633 kubelet[2196]: I0306 01:42:25.445666 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:25.446633 kubelet[2196]: I0306 01:42:25.445691 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:25.446633 kubelet[2196]: I0306 01:42:25.445713 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:25.446633 kubelet[2196]: I0306 01:42:25.445734 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:25.446836 kubelet[2196]: 
I0306 01:42:25.445754 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:25.446836 kubelet[2196]: I0306 01:42:25.445835 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:25.446836 kubelet[2196]: I0306 01:42:25.445863 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:25.446836 kubelet[2196]: I0306 01:42:25.445886 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:25.450575 kubelet[2196]: E0306 01:42:25.450337 2196 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:25.481421 systemd[1]: Created slice kubepods-burstable-pod6f73765a2516cb0fb88ff605d5dc353b.slice - libcontainer container kubepods-burstable-pod6f73765a2516cb0fb88ff605d5dc353b.slice. 
Mar 6 01:42:25.486447 kubelet[2196]: E0306 01:42:25.485917 2196 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:25.619587 kubelet[2196]: I0306 01:42:25.619423 2196 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 6 01:42:25.619990 kubelet[2196]: E0306 01:42:25.619909 2196 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Mar 6 01:42:25.739612 kubelet[2196]: E0306 01:42:25.739031 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:25.739984 kubelet[2196]: E0306 01:42:25.739904 2196 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Mar 6 01:42:25.741123 containerd[1476]: time="2026-03-06T01:42:25.740915226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 6 01:42:25.764591 kubelet[2196]: E0306 01:42:25.763284 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:25.767818 containerd[1476]: time="2026-03-06T01:42:25.767671722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 6 01:42:25.798850 kubelet[2196]: E0306 01:42:25.798394 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:25.799428 containerd[1476]: time="2026-03-06T01:42:25.799264710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f73765a2516cb0fb88ff605d5dc353b,Namespace:kube-system,Attempt:0,}" Mar 6 01:42:26.025279 kubelet[2196]: I0306 01:42:26.023902 2196 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 6 01:42:26.026294 kubelet[2196]: E0306 01:42:26.026210 2196 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Mar 6 01:42:26.347001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4197022330.mount: Deactivated successfully. Mar 6 01:42:26.362854 containerd[1476]: time="2026-03-06T01:42:26.362735689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:42:26.367617 containerd[1476]: time="2026-03-06T01:42:26.367456813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:42:26.371071 containerd[1476]: time="2026-03-06T01:42:26.370639464Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:42:26.375284 containerd[1476]: time="2026-03-06T01:42:26.375205515Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:42:26.377351 containerd[1476]: time="2026-03-06T01:42:26.377287870Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:42:26.382006 containerd[1476]: time="2026-03-06T01:42:26.381939320Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:42:26.383910 containerd[1476]: time="2026-03-06T01:42:26.383764584Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 6 01:42:26.386484 containerd[1476]: time="2026-03-06T01:42:26.386410653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:42:26.387831 containerd[1476]: time="2026-03-06T01:42:26.387761773Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 619.973491ms" Mar 6 01:42:26.395202 containerd[1476]: time="2026-03-06T01:42:26.395061325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.913082ms" Mar 6 01:42:26.395360 containerd[1476]: time="2026-03-06T01:42:26.395282375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.904795ms" Mar 6 01:42:26.542948 kubelet[2196]: E0306 01:42:26.542821 2196 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s" Mar 6 01:42:26.680432 containerd[1476]: time="2026-03-06T01:42:26.679941736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:26.680432 containerd[1476]: time="2026-03-06T01:42:26.680173040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:26.680432 containerd[1476]: time="2026-03-06T01:42:26.680297773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:26.681607 containerd[1476]: time="2026-03-06T01:42:26.681405521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:26.706179 containerd[1476]: time="2026-03-06T01:42:26.705663296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:26.706179 containerd[1476]: time="2026-03-06T01:42:26.705746058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:26.706179 containerd[1476]: time="2026-03-06T01:42:26.705770396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:26.706179 containerd[1476]: time="2026-03-06T01:42:26.705927573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:26.714612 containerd[1476]: time="2026-03-06T01:42:26.714269743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:26.714612 containerd[1476]: time="2026-03-06T01:42:26.714325903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:26.714612 containerd[1476]: time="2026-03-06T01:42:26.714337064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:26.714612 containerd[1476]: time="2026-03-06T01:42:26.714413855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:26.806069 systemd[1]: Started cri-containerd-16e050d5111bd8bcbc2f022c2500c20c9ed73b8aaee509b4969b0bf0b75ec861.scope - libcontainer container 16e050d5111bd8bcbc2f022c2500c20c9ed73b8aaee509b4969b0bf0b75ec861. Mar 6 01:42:26.816616 systemd[1]: Started cri-containerd-69d047a20b18d3903033ad7a5e9a4773358f13dca21b189a8b8e0444cea423b9.scope - libcontainer container 69d047a20b18d3903033ad7a5e9a4773358f13dca21b189a8b8e0444cea423b9. 
Mar 6 01:42:26.835909 kubelet[2196]: I0306 01:42:26.835617 2196 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 6 01:42:26.837227 kubelet[2196]: E0306 01:42:26.837045 2196 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Mar 6 01:42:26.889225 systemd[1]: Started cri-containerd-590b4a9a8629a36e9b0da672fe00405368e486a8d77bb327451e191175aeff0f.scope - libcontainer container 590b4a9a8629a36e9b0da672fe00405368e486a8d77bb327451e191175aeff0f. Mar 6 01:42:27.000999 containerd[1476]: time="2026-03-06T01:42:27.000478407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"69d047a20b18d3903033ad7a5e9a4773358f13dca21b189a8b8e0444cea423b9\"" Mar 6 01:42:27.007188 kubelet[2196]: E0306 01:42:27.006924 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:27.007439 containerd[1476]: time="2026-03-06T01:42:27.007222428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f73765a2516cb0fb88ff605d5dc353b,Namespace:kube-system,Attempt:0,} returns sandbox id \"590b4a9a8629a36e9b0da672fe00405368e486a8d77bb327451e191175aeff0f\"" Mar 6 01:42:27.014653 kubelet[2196]: E0306 01:42:27.014592 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:27.021492 containerd[1476]: time="2026-03-06T01:42:27.021396242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"16e050d5111bd8bcbc2f022c2500c20c9ed73b8aaee509b4969b0bf0b75ec861\"" Mar 6 01:42:27.022797 kubelet[2196]: E0306 01:42:27.022735 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:27.067070 kubelet[2196]: E0306 01:42:27.066961 2196 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 01:42:27.178317 containerd[1476]: time="2026-03-06T01:42:27.178215778Z" level=info msg="CreateContainer within sandbox \"69d047a20b18d3903033ad7a5e9a4773358f13dca21b189a8b8e0444cea423b9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 01:42:27.271319 containerd[1476]: time="2026-03-06T01:42:27.271151032Z" level=info msg="CreateContainer within sandbox \"590b4a9a8629a36e9b0da672fe00405368e486a8d77bb327451e191175aeff0f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 01:42:27.288370 containerd[1476]: time="2026-03-06T01:42:27.288254186Z" level=info msg="CreateContainer within sandbox \"16e050d5111bd8bcbc2f022c2500c20c9ed73b8aaee509b4969b0bf0b75ec861\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 01:42:27.351272 containerd[1476]: time="2026-03-06T01:42:27.351132461Z" level=info msg="CreateContainer within sandbox \"69d047a20b18d3903033ad7a5e9a4773358f13dca21b189a8b8e0444cea423b9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f17afcbf62b3fbf7a44e32fe869c96863a7a4d3270921a4b72c216ddb46183fb\"" Mar 6 01:42:27.353977 containerd[1476]: time="2026-03-06T01:42:27.353894794Z" level=info msg="StartContainer for 
\"f17afcbf62b3fbf7a44e32fe869c96863a7a4d3270921a4b72c216ddb46183fb\"" Mar 6 01:42:27.364241 containerd[1476]: time="2026-03-06T01:42:27.363769426Z" level=info msg="CreateContainer within sandbox \"590b4a9a8629a36e9b0da672fe00405368e486a8d77bb327451e191175aeff0f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"993a7e7d39b12aef2c54ee2c3ddf27d17953d2fec5277afe545f89f6dbb60f4e\"" Mar 6 01:42:27.366255 containerd[1476]: time="2026-03-06T01:42:27.365068450Z" level=info msg="StartContainer for \"993a7e7d39b12aef2c54ee2c3ddf27d17953d2fec5277afe545f89f6dbb60f4e\"" Mar 6 01:42:27.372851 containerd[1476]: time="2026-03-06T01:42:27.372762990Z" level=info msg="CreateContainer within sandbox \"16e050d5111bd8bcbc2f022c2500c20c9ed73b8aaee509b4969b0bf0b75ec861\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49df3d3d2ac273dafd47e8b272323ea6a3c770b783ab99bb793ffacfccb93935\"" Mar 6 01:42:27.373986 containerd[1476]: time="2026-03-06T01:42:27.373936281Z" level=info msg="StartContainer for \"49df3d3d2ac273dafd47e8b272323ea6a3c770b783ab99bb793ffacfccb93935\"" Mar 6 01:42:27.418775 systemd[1]: Started cri-containerd-f17afcbf62b3fbf7a44e32fe869c96863a7a4d3270921a4b72c216ddb46183fb.scope - libcontainer container f17afcbf62b3fbf7a44e32fe869c96863a7a4d3270921a4b72c216ddb46183fb. Mar 6 01:42:27.424441 systemd[1]: Started cri-containerd-993a7e7d39b12aef2c54ee2c3ddf27d17953d2fec5277afe545f89f6dbb60f4e.scope - libcontainer container 993a7e7d39b12aef2c54ee2c3ddf27d17953d2fec5277afe545f89f6dbb60f4e. Mar 6 01:42:27.443858 systemd[1]: Started cri-containerd-49df3d3d2ac273dafd47e8b272323ea6a3c770b783ab99bb793ffacfccb93935.scope - libcontainer container 49df3d3d2ac273dafd47e8b272323ea6a3c770b783ab99bb793ffacfccb93935. 
Mar 6 01:42:27.497067 containerd[1476]: time="2026-03-06T01:42:27.496990229Z" level=info msg="StartContainer for \"f17afcbf62b3fbf7a44e32fe869c96863a7a4d3270921a4b72c216ddb46183fb\" returns successfully" Mar 6 01:42:27.504963 containerd[1476]: time="2026-03-06T01:42:27.504763233Z" level=info msg="StartContainer for \"993a7e7d39b12aef2c54ee2c3ddf27d17953d2fec5277afe545f89f6dbb60f4e\" returns successfully" Mar 6 01:42:27.509083 containerd[1476]: time="2026-03-06T01:42:27.509019937Z" level=info msg="StartContainer for \"49df3d3d2ac273dafd47e8b272323ea6a3c770b783ab99bb793ffacfccb93935\" returns successfully" Mar 6 01:42:28.374144 kubelet[2196]: E0306 01:42:28.374043 2196 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:28.375247 kubelet[2196]: E0306 01:42:28.374403 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:28.380852 kubelet[2196]: E0306 01:42:28.380698 2196 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:28.381608 kubelet[2196]: E0306 01:42:28.381044 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:28.381938 kubelet[2196]: E0306 01:42:28.381873 2196 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:42:28.381993 kubelet[2196]: E0306 01:42:28.381985 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:28.448014 kubelet[2196]: 
I0306 01:42:28.447873 2196 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 6 01:42:29.249102 kubelet[2196]: E0306 01:42:29.248994 2196 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 6 01:42:29.300598 kubelet[2196]: I0306 01:42:29.299946 2196 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 6 01:42:29.300598 kubelet[2196]: E0306 01:42:29.300017 2196 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 6 01:42:29.345850 kubelet[2196]: I0306 01:42:29.343988 2196 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:29.356767 kubelet[2196]: E0306 01:42:29.356643 2196 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:29.356767 kubelet[2196]: I0306 01:42:29.356762 2196 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:29.359173 kubelet[2196]: E0306 01:42:29.359108 2196 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:29.359285 kubelet[2196]: I0306 01:42:29.359159 2196 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:29.362625 kubelet[2196]: E0306 01:42:29.361820 2196 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:29.416422 kubelet[2196]: 
I0306 01:42:29.416192 2196 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:29.418847 kubelet[2196]: I0306 01:42:29.417612 2196 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:29.424467 kubelet[2196]: I0306 01:42:29.419438 2196 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:29.424467 kubelet[2196]: E0306 01:42:29.423832 2196 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:29.427232 kubelet[2196]: E0306 01:42:29.426920 2196 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:29.429496 kubelet[2196]: E0306 01:42:29.427789 2196 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:29.429496 kubelet[2196]: E0306 01:42:29.428773 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:29.429496 kubelet[2196]: E0306 01:42:29.428959 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:29.430277 kubelet[2196]: E0306 01:42:29.430036 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 
01:42:30.074763 kubelet[2196]: I0306 01:42:30.073754 2196 apiserver.go:52] "Watching apiserver" Mar 6 01:42:30.149624 kubelet[2196]: I0306 01:42:30.145881 2196 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 6 01:42:30.421424 kubelet[2196]: I0306 01:42:30.416089 2196 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:30.421424 kubelet[2196]: I0306 01:42:30.420179 2196 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:30.449043 kubelet[2196]: E0306 01:42:30.446138 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:30.462300 kubelet[2196]: E0306 01:42:30.462263 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:31.421851 kubelet[2196]: E0306 01:42:31.420813 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:31.421851 kubelet[2196]: E0306 01:42:31.422005 2196 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:33.601162 systemd[1]: Reloading requested from client PID 2491 ('systemctl') (unit session-9.scope)... Mar 6 01:42:33.601201 systemd[1]: Reloading... Mar 6 01:42:33.771078 zram_generator::config[2528]: No configuration found. Mar 6 01:42:33.926700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 6 01:42:34.035233 systemd[1]: Reloading finished in 433 ms. Mar 6 01:42:34.108172 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:42:34.132882 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 01:42:34.133425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:42:34.133509 systemd[1]: kubelet.service: Consumed 2.162s CPU time, 130.6M memory peak, 0B memory swap peak. Mar 6 01:42:34.144412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:42:34.423825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:42:34.447947 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:42:34.533631 kubelet[2575]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:42:34.557744 kubelet[2575]: I0306 01:42:34.556666 2575 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 6 01:42:34.557744 kubelet[2575]: I0306 01:42:34.557804 2575 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:42:34.557744 kubelet[2575]: I0306 01:42:34.557849 2575 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 6 01:42:34.557744 kubelet[2575]: I0306 01:42:34.557862 2575 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 6 01:42:34.558945 kubelet[2575]: I0306 01:42:34.558839 2575 server.go:951] "Client rotation is on, will bootstrap in background" Mar 6 01:42:34.577180 kubelet[2575]: I0306 01:42:34.577095 2575 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 01:42:34.589105 kubelet[2575]: I0306 01:42:34.586690 2575 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:42:34.927414 kubelet[2575]: E0306 01:42:34.927290 2575 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:42:34.929843 kubelet[2575]: I0306 01:42:34.928634 2575 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 6 01:42:34.938630 kubelet[2575]: I0306 01:42:34.938452 2575 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 6 01:42:34.939288 kubelet[2575]: I0306 01:42:34.939152 2575 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:42:34.939607 kubelet[2575]: I0306 01:42:34.939227 2575 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 01:42:34.939607 kubelet[2575]: I0306 01:42:34.939598 2575 topology_manager.go:143] "Creating topology manager with none policy" Mar 6 01:42:34.939880 
kubelet[2575]: I0306 01:42:34.939615 2575 container_manager_linux.go:308] "Creating device plugin manager" Mar 6 01:42:34.939880 kubelet[2575]: I0306 01:42:34.939715 2575 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 6 01:42:34.940501 kubelet[2575]: I0306 01:42:34.940026 2575 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 6 01:42:34.940501 kubelet[2575]: I0306 01:42:34.940489 2575 kubelet.go:482] "Attempting to sync node with API server" Mar 6 01:42:34.940501 kubelet[2575]: I0306 01:42:34.940513 2575 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:42:34.940834 kubelet[2575]: I0306 01:42:34.940607 2575 kubelet.go:394] "Adding apiserver pod source" Mar 6 01:42:34.940834 kubelet[2575]: I0306 01:42:34.940620 2575 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:42:34.943810 kubelet[2575]: I0306 01:42:34.943669 2575 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:42:34.945633 kubelet[2575]: I0306 01:42:34.945083 2575 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:42:34.945633 kubelet[2575]: I0306 01:42:34.945158 2575 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 6 01:42:34.951198 kubelet[2575]: I0306 01:42:34.951073 2575 server.go:1257] "Started kubelet" Mar 6 01:42:34.964401 kubelet[2575]: I0306 01:42:34.961144 2575 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 6 01:42:34.985125 kubelet[2575]: I0306 01:42:34.966040 2575 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:42:34.985125 kubelet[2575]: I0306 01:42:34.966996 2575 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Mar 6 01:42:34.985125 kubelet[2575]: I0306 01:42:34.967096 2575 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 6 01:42:34.985125 kubelet[2575]: I0306 01:42:34.967726 2575 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:42:34.985125 kubelet[2575]: I0306 01:42:34.985063 2575 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 6 01:42:34.986011 kubelet[2575]: I0306 01:42:34.985986 2575 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:42:34.989055 kubelet[2575]: E0306 01:42:34.988739 2575 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:42:34.994514 kubelet[2575]: I0306 01:42:34.994490 2575 server.go:317] "Adding debug handlers to kubelet server" Mar 6 01:42:35.001599 kubelet[2575]: I0306 01:42:34.994716 2575 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 6 01:42:35.001599 kubelet[2575]: I0306 01:42:35.000684 2575 reconciler.go:29] "Reconciler: start to sync state" Mar 6 01:42:35.012123 kubelet[2575]: I0306 01:42:35.012041 2575 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:42:35.012123 kubelet[2575]: I0306 01:42:35.012097 2575 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:42:35.012476 kubelet[2575]: I0306 01:42:35.012366 2575 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:42:35.051515 kubelet[2575]: I0306 01:42:35.050947 2575 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 6 01:42:35.139135 kubelet[2575]: I0306 01:42:35.138838 2575 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 6 01:42:35.143060 kubelet[2575]: I0306 01:42:35.142741 2575 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 6 01:42:35.149700 kubelet[2575]: I0306 01:42:35.148882 2575 kubelet.go:2501] "Starting kubelet main sync loop" Mar 6 01:42:35.256864 kubelet[2575]: E0306 01:42:35.246506 2575 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:42:35.336595 kubelet[2575]: I0306 01:42:35.334504 2575 cpu_manager.go:225] "Starting" policy="none" Mar 6 01:42:35.336595 kubelet[2575]: I0306 01:42:35.334613 2575 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 6 01:42:35.336595 kubelet[2575]: I0306 01:42:35.334723 2575 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 6 01:42:35.336595 kubelet[2575]: I0306 01:42:35.334953 2575 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 6 01:42:35.336595 kubelet[2575]: I0306 01:42:35.334965 2575 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 6 01:42:35.336595 kubelet[2575]: I0306 01:42:35.334983 2575 policy_none.go:50] "Start" Mar 6 01:42:35.336595 kubelet[2575]: I0306 01:42:35.334992 2575 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 6 01:42:35.336595 kubelet[2575]: I0306 01:42:35.335002 2575 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 6 01:42:35.336595 kubelet[2575]: I0306 01:42:35.335226 2575 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 6 01:42:35.336595 kubelet[2575]: I0306 01:42:35.335238 2575 policy_none.go:44] 
"Start" Mar 6 01:42:35.377899 kubelet[2575]: E0306 01:42:35.360048 2575 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 6 01:42:35.538328 kubelet[2575]: E0306 01:42:35.538233 2575 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:42:35.539728 kubelet[2575]: I0306 01:42:35.539417 2575 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 6 01:42:35.540125 kubelet[2575]: I0306 01:42:35.540005 2575 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:42:35.541962 kubelet[2575]: I0306 01:42:35.541070 2575 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 6 01:42:35.550616 kubelet[2575]: E0306 01:42:35.549047 2575 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 01:42:35.585323 kubelet[2575]: I0306 01:42:35.585223 2575 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:35.588357 kubelet[2575]: I0306 01:42:35.588335 2575 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:35.592640 kubelet[2575]: I0306 01:42:35.592624 2575 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:35.606106 kubelet[2575]: E0306 01:42:35.604504 2575 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:35.607644 kubelet[2575]: E0306 01:42:35.607617 2575 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:35.644495 kubelet[2575]: I0306 01:42:35.644421 2575 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:35.644785 kubelet[2575]: I0306 01:42:35.644737 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:35.644926 kubelet[2575]: I0306 01:42:35.644909 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:42:35.645166 kubelet[2575]: I0306 01:42:35.645089 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:35.645166 kubelet[2575]: I0306 01:42:35.645157 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:35.645293 kubelet[2575]: I0306 01:42:35.645193 2575 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:35.645293 kubelet[2575]: I0306 01:42:35.645216 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:35.645293 kubelet[2575]: I0306 01:42:35.645243 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:42:35.645293 kubelet[2575]: I0306 01:42:35.645273 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:42:35.702586 kubelet[2575]: I0306 01:42:35.698930 2575 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 6 01:42:35.895119 kubelet[2575]: I0306 01:42:35.894652 2575 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 6 01:42:35.895119 kubelet[2575]: I0306 01:42:35.895055 2575 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 6 01:42:35.902217 kubelet[2575]: 
E0306 01:42:35.902146 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:35.909216 kubelet[2575]: E0306 01:42:35.908957 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:35.909216 kubelet[2575]: E0306 01:42:35.909065 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:35.942489 kubelet[2575]: I0306 01:42:35.942459 2575 apiserver.go:52] "Watching apiserver" Mar 6 01:42:36.001020 kubelet[2575]: I0306 01:42:36.000937 2575 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 6 01:42:36.280674 kubelet[2575]: I0306 01:42:36.278946 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.278843744 podStartE2EDuration="6.278843744s" podCreationTimestamp="2026-03-06 01:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:42:36.246113676 +0000 UTC m=+1.782717711" watchObservedRunningTime="2026-03-06 01:42:36.278843744 +0000 UTC m=+1.815447778" Mar 6 01:42:36.280674 kubelet[2575]: I0306 01:42:36.279270 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.279265164 podStartE2EDuration="1.279265164s" podCreationTimestamp="2026-03-06 01:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:42:36.276365192 +0000 UTC m=+1.812969228" 
watchObservedRunningTime="2026-03-06 01:42:36.279265164 +0000 UTC m=+1.815869200" Mar 6 01:42:36.318500 kubelet[2575]: E0306 01:42:36.318403 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:36.326381 kubelet[2575]: E0306 01:42:36.326112 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:36.326381 kubelet[2575]: E0306 01:42:36.326271 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:36.336577 kubelet[2575]: I0306 01:42:36.336413 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.336371875 podStartE2EDuration="6.336371875s" podCreationTimestamp="2026-03-06 01:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:42:36.299362447 +0000 UTC m=+1.835966512" watchObservedRunningTime="2026-03-06 01:42:36.336371875 +0000 UTC m=+1.872975911" Mar 6 01:42:37.327034 kubelet[2575]: E0306 01:42:37.326692 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:37.327034 kubelet[2575]: E0306 01:42:37.326736 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:37.327034 kubelet[2575]: E0306 01:42:37.326890 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:38.349934 kubelet[2575]: E0306 01:42:38.349813 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:39.341833 kubelet[2575]: I0306 01:42:39.341757 2575 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 01:42:39.342794 containerd[1476]: time="2026-03-06T01:42:39.342741568Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 6 01:42:39.343445 kubelet[2575]: I0306 01:42:39.343027 2575 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 01:42:39.356269 kubelet[2575]: E0306 01:42:39.356163 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:40.725912 systemd[1]: Created slice kubepods-besteffort-pod52cd20d8_be8b_4b66_88c8_45a90f4ee39d.slice - libcontainer container kubepods-besteffort-pod52cd20d8_be8b_4b66_88c8_45a90f4ee39d.slice. 
Mar 6 01:42:40.792335 kubelet[2575]: I0306 01:42:40.792191 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/52cd20d8-be8b-4b66-88c8-45a90f4ee39d-kube-proxy\") pod \"kube-proxy-n9p5f\" (UID: \"52cd20d8-be8b-4b66-88c8-45a90f4ee39d\") " pod="kube-system/kube-proxy-n9p5f" Mar 6 01:42:40.792335 kubelet[2575]: I0306 01:42:40.792323 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52cd20d8-be8b-4b66-88c8-45a90f4ee39d-xtables-lock\") pod \"kube-proxy-n9p5f\" (UID: \"52cd20d8-be8b-4b66-88c8-45a90f4ee39d\") " pod="kube-system/kube-proxy-n9p5f" Mar 6 01:42:40.792335 kubelet[2575]: I0306 01:42:40.792362 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52cd20d8-be8b-4b66-88c8-45a90f4ee39d-lib-modules\") pod \"kube-proxy-n9p5f\" (UID: \"52cd20d8-be8b-4b66-88c8-45a90f4ee39d\") " pod="kube-system/kube-proxy-n9p5f" Mar 6 01:42:40.792335 kubelet[2575]: I0306 01:42:40.792389 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkn7p\" (UniqueName: \"kubernetes.io/projected/52cd20d8-be8b-4b66-88c8-45a90f4ee39d-kube-api-access-gkn7p\") pod \"kube-proxy-n9p5f\" (UID: \"52cd20d8-be8b-4b66-88c8-45a90f4ee39d\") " pod="kube-system/kube-proxy-n9p5f" Mar 6 01:42:40.913432 kubelet[2575]: E0306 01:42:40.912896 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:41.113006 systemd[1]: Created slice kubepods-besteffort-pod35c1fb1a_bdb8_410b_ab11_e219ce841f97.slice - libcontainer container kubepods-besteffort-pod35c1fb1a_bdb8_410b_ab11_e219ce841f97.slice. 
Mar 6 01:42:41.124713 kubelet[2575]: E0306 01:42:41.124631 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:41.149576 kubelet[2575]: I0306 01:42:41.149216 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwz97\" (UniqueName: \"kubernetes.io/projected/35c1fb1a-bdb8-410b-ab11-e219ce841f97-kube-api-access-bwz97\") pod \"tigera-operator-6cf4cccc57-ddwlb\" (UID: \"35c1fb1a-bdb8-410b-ab11-e219ce841f97\") " pod="tigera-operator/tigera-operator-6cf4cccc57-ddwlb" Mar 6 01:42:41.149576 kubelet[2575]: I0306 01:42:41.149334 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/35c1fb1a-bdb8-410b-ab11-e219ce841f97-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-ddwlb\" (UID: \"35c1fb1a-bdb8-410b-ab11-e219ce841f97\") " pod="tigera-operator/tigera-operator-6cf4cccc57-ddwlb" Mar 6 01:42:41.218035 containerd[1476]: time="2026-03-06T01:42:41.217967205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n9p5f,Uid:52cd20d8-be8b-4b66-88c8-45a90f4ee39d,Namespace:kube-system,Attempt:0,}" Mar 6 01:42:41.514770 containerd[1476]: time="2026-03-06T01:42:41.513729554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-ddwlb,Uid:35c1fb1a-bdb8-410b-ab11-e219ce841f97,Namespace:tigera-operator,Attempt:0,}" Mar 6 01:42:41.638610 containerd[1476]: time="2026-03-06T01:42:41.637415509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:41.638610 containerd[1476]: time="2026-03-06T01:42:41.637758455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:41.638610 containerd[1476]: time="2026-03-06T01:42:41.637830003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:41.638610 containerd[1476]: time="2026-03-06T01:42:41.638051556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:41.706365 containerd[1476]: time="2026-03-06T01:42:41.697970114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:41.706365 containerd[1476]: time="2026-03-06T01:42:41.698223850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:41.706365 containerd[1476]: time="2026-03-06T01:42:41.698252776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:41.715084 containerd[1476]: time="2026-03-06T01:42:41.714015863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:41.990801 systemd[1]: Started cri-containerd-faa5d9a808da44df8f1ae32ee967f8067d91ac24e0c8d5a3c75476f3ac82ebf3.scope - libcontainer container faa5d9a808da44df8f1ae32ee967f8067d91ac24e0c8d5a3c75476f3ac82ebf3. Mar 6 01:42:42.068071 systemd[1]: Started cri-containerd-2bb842f2afa483566f67ca6e7b86598304cdbfdc8c25e601dc6a31ca3c4299bf.scope - libcontainer container 2bb842f2afa483566f67ca6e7b86598304cdbfdc8c25e601dc6a31ca3c4299bf. 
Mar 6 01:42:42.120213 containerd[1476]: time="2026-03-06T01:42:42.120013535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-ddwlb,Uid:35c1fb1a-bdb8-410b-ab11-e219ce841f97,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"faa5d9a808da44df8f1ae32ee967f8067d91ac24e0c8d5a3c75476f3ac82ebf3\"" Mar 6 01:42:42.124455 containerd[1476]: time="2026-03-06T01:42:42.124282650Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 6 01:42:42.146801 containerd[1476]: time="2026-03-06T01:42:42.146664332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n9p5f,Uid:52cd20d8-be8b-4b66-88c8-45a90f4ee39d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bb842f2afa483566f67ca6e7b86598304cdbfdc8c25e601dc6a31ca3c4299bf\"" Mar 6 01:42:42.149095 kubelet[2575]: E0306 01:42:42.148977 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:42.158870 containerd[1476]: time="2026-03-06T01:42:42.158740487Z" level=info msg="CreateContainer within sandbox \"2bb842f2afa483566f67ca6e7b86598304cdbfdc8c25e601dc6a31ca3c4299bf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 01:42:42.210257 containerd[1476]: time="2026-03-06T01:42:42.210026143Z" level=info msg="CreateContainer within sandbox \"2bb842f2afa483566f67ca6e7b86598304cdbfdc8c25e601dc6a31ca3c4299bf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b7506097145a9ee0fed89b7184a53eea34582464e12b800c7cf4668a3a2c18d5\"" Mar 6 01:42:42.212468 containerd[1476]: time="2026-03-06T01:42:42.211317915Z" level=info msg="StartContainer for \"b7506097145a9ee0fed89b7184a53eea34582464e12b800c7cf4668a3a2c18d5\"" Mar 6 01:42:42.270990 systemd[1]: Started cri-containerd-b7506097145a9ee0fed89b7184a53eea34582464e12b800c7cf4668a3a2c18d5.scope - libcontainer container 
b7506097145a9ee0fed89b7184a53eea34582464e12b800c7cf4668a3a2c18d5. Mar 6 01:42:42.335890 containerd[1476]: time="2026-03-06T01:42:42.335807572Z" level=info msg="StartContainer for \"b7506097145a9ee0fed89b7184a53eea34582464e12b800c7cf4668a3a2c18d5\" returns successfully" Mar 6 01:42:42.463885 kubelet[2575]: E0306 01:42:42.463805 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:43.821888 containerd[1476]: time="2026-03-06T01:42:43.821821259Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:43.825106 containerd[1476]: time="2026-03-06T01:42:43.824970721Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 6 01:42:43.826983 containerd[1476]: time="2026-03-06T01:42:43.826859016Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:43.830957 containerd[1476]: time="2026-03-06T01:42:43.830827982Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:43.832477 containerd[1476]: time="2026-03-06T01:42:43.832381945Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.708029973s" Mar 6 01:42:43.832477 containerd[1476]: time="2026-03-06T01:42:43.832462450Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 6 01:42:43.841659 containerd[1476]: time="2026-03-06T01:42:43.841407339Z" level=info msg="CreateContainer within sandbox \"faa5d9a808da44df8f1ae32ee967f8067d91ac24e0c8d5a3c75476f3ac82ebf3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 6 01:42:43.861438 containerd[1476]: time="2026-03-06T01:42:43.861339119Z" level=info msg="CreateContainer within sandbox \"faa5d9a808da44df8f1ae32ee967f8067d91ac24e0c8d5a3c75476f3ac82ebf3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"df45e336eaee44edc877fbe1df9b8ba42cf0b06aa56f565954069f376da11333\"" Mar 6 01:42:43.863984 containerd[1476]: time="2026-03-06T01:42:43.862477723Z" level=info msg="StartContainer for \"df45e336eaee44edc877fbe1df9b8ba42cf0b06aa56f565954069f376da11333\"" Mar 6 01:42:43.917948 systemd[1]: Started cri-containerd-df45e336eaee44edc877fbe1df9b8ba42cf0b06aa56f565954069f376da11333.scope - libcontainer container df45e336eaee44edc877fbe1df9b8ba42cf0b06aa56f565954069f376da11333. 
Mar 6 01:42:43.963564 containerd[1476]: time="2026-03-06T01:42:43.963430770Z" level=info msg="StartContainer for \"df45e336eaee44edc877fbe1df9b8ba42cf0b06aa56f565954069f376da11333\" returns successfully" Mar 6 01:42:44.480663 kubelet[2575]: I0306 01:42:44.480516 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-n9p5f" podStartSLOduration=4.480501462 podStartE2EDuration="4.480501462s" podCreationTimestamp="2026-03-06 01:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:42:42.483699147 +0000 UTC m=+8.020303222" watchObservedRunningTime="2026-03-06 01:42:44.480501462 +0000 UTC m=+10.017105497" Mar 6 01:42:44.481197 kubelet[2575]: I0306 01:42:44.480687 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-ddwlb" podStartSLOduration=2.770587518 podStartE2EDuration="4.480681025s" podCreationTimestamp="2026-03-06 01:42:40 +0000 UTC" firstStartedPulling="2026-03-06 01:42:42.123780533 +0000 UTC m=+7.660384568" lastFinishedPulling="2026-03-06 01:42:43.83387403 +0000 UTC m=+9.370478075" observedRunningTime="2026-03-06 01:42:44.480072406 +0000 UTC m=+10.016676441" watchObservedRunningTime="2026-03-06 01:42:44.480681025 +0000 UTC m=+10.017285060" Mar 6 01:42:46.228260 kubelet[2575]: E0306 01:42:46.228180 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:48.493851 kubelet[2575]: E0306 01:42:48.493811 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:50.048801 sudo[1665]: pam_unix(sudo:session): session closed for user root Mar 6 01:42:50.055956 sshd[1662]: pam_unix(sshd:session): 
session closed for user core Mar 6 01:42:50.062011 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Mar 6 01:42:50.063219 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:58892.service: Deactivated successfully. Mar 6 01:42:50.066986 systemd[1]: session-9.scope: Deactivated successfully. Mar 6 01:42:50.067764 systemd[1]: session-9.scope: Consumed 12.191s CPU time, 161.9M memory peak, 0B memory swap peak. Mar 6 01:42:50.071043 systemd-logind[1456]: Removed session 9. Mar 6 01:42:50.920068 kubelet[2575]: E0306 01:42:50.919962 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:52.833209 kubelet[2575]: I0306 01:42:52.827811 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b02dc71-0552-4bed-812f-b3b42e4e6e79-tigera-ca-bundle\") pod \"calico-typha-7694b7b655-gjvv5\" (UID: \"6b02dc71-0552-4bed-812f-b3b42e4e6e79\") " pod="calico-system/calico-typha-7694b7b655-gjvv5" Mar 6 01:42:52.833209 kubelet[2575]: I0306 01:42:52.828151 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6b02dc71-0552-4bed-812f-b3b42e4e6e79-typha-certs\") pod \"calico-typha-7694b7b655-gjvv5\" (UID: \"6b02dc71-0552-4bed-812f-b3b42e4e6e79\") " pod="calico-system/calico-typha-7694b7b655-gjvv5" Mar 6 01:42:52.833209 kubelet[2575]: I0306 01:42:52.828268 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6qf\" (UniqueName: \"kubernetes.io/projected/6b02dc71-0552-4bed-812f-b3b42e4e6e79-kube-api-access-8p6qf\") pod \"calico-typha-7694b7b655-gjvv5\" (UID: \"6b02dc71-0552-4bed-812f-b3b42e4e6e79\") " pod="calico-system/calico-typha-7694b7b655-gjvv5" Mar 6 01:42:53.009873 systemd[1]: 
Created slice kubepods-besteffort-pod6b02dc71_0552_4bed_812f_b3b42e4e6e79.slice - libcontainer container kubepods-besteffort-pod6b02dc71_0552_4bed_812f_b3b42e4e6e79.slice. Mar 6 01:42:53.362102 systemd[1]: Created slice kubepods-besteffort-pod40eccf84_c922_4ecd_a525_60c3638ef8c1.slice - libcontainer container kubepods-besteffort-pod40eccf84_c922_4ecd_a525_60c3638ef8c1.slice. Mar 6 01:42:53.378817 kubelet[2575]: E0306 01:42:53.378745 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:53.380392 containerd[1476]: time="2026-03-06T01:42:53.380325002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7694b7b655-gjvv5,Uid:6b02dc71-0552-4bed-812f-b3b42e4e6e79,Namespace:calico-system,Attempt:0,}" Mar 6 01:42:53.411955 kubelet[2575]: I0306 01:42:53.411610 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-cni-net-dir\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.411955 kubelet[2575]: I0306 01:42:53.411817 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-bpffs\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.411955 kubelet[2575]: I0306 01:42:53.411910 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-lib-modules\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 
01:42:53.411955 kubelet[2575]: I0306 01:42:53.411925 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/40eccf84-c922-4ecd-a525-60c3638ef8c1-node-certs\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.411955 kubelet[2575]: I0306 01:42:53.411941 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40eccf84-c922-4ecd-a525-60c3638ef8c1-tigera-ca-bundle\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.414379 kubelet[2575]: I0306 01:42:53.411954 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-var-run-calico\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.414379 kubelet[2575]: I0306 01:42:53.411967 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-xtables-lock\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.414379 kubelet[2575]: I0306 01:42:53.411982 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-policysync\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.418787 kubelet[2575]: I0306 01:42:53.412090 2575 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-sys-fs\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.418881 kubelet[2575]: I0306 01:42:53.418798 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-flexvol-driver-host\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.418881 kubelet[2575]: I0306 01:42:53.418840 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-var-lib-calico\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.418881 kubelet[2575]: I0306 01:42:53.418869 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-cni-log-dir\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.419336 kubelet[2575]: I0306 01:42:53.418895 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-nodeproc\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.419336 kubelet[2575]: I0306 01:42:53.418931 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" 
(UniqueName: \"kubernetes.io/host-path/40eccf84-c922-4ecd-a525-60c3638ef8c1-cni-bin-dir\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.419336 kubelet[2575]: I0306 01:42:53.418956 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p7dp\" (UniqueName: \"kubernetes.io/projected/40eccf84-c922-4ecd-a525-60c3638ef8c1-kube-api-access-2p7dp\") pod \"calico-node-n8fv4\" (UID: \"40eccf84-c922-4ecd-a525-60c3638ef8c1\") " pod="calico-system/calico-node-n8fv4" Mar 6 01:42:53.922348 kubelet[2575]: E0306 01:42:53.845110 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:54.010234 kubelet[2575]: W0306 01:42:53.941457 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:54.010234 kubelet[2575]: E0306 01:42:54.009943 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:54.022016 containerd[1476]: time="2026-03-06T01:42:54.011878076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:54.022016 containerd[1476]: time="2026-03-06T01:42:54.012057467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:54.022016 containerd[1476]: time="2026-03-06T01:42:54.012073247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:54.022016 containerd[1476]: time="2026-03-06T01:42:54.012465562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:54.043435 kubelet[2575]: E0306 01:42:54.043360 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:54.043435 kubelet[2575]: W0306 01:42:54.043403 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:54.044001 kubelet[2575]: E0306 01:42:54.043475 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:54.107993 kubelet[2575]: E0306 01:42:54.105300 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:54.141892 kubelet[2575]: W0306 01:42:54.129258 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:54.161161 kubelet[2575]: E0306 01:42:54.160004 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.136074 systemd[1]: Started cri-containerd-f46cc93d0269a85fb3ab3ba2c90d59d0936ed1ce5a2351e8f6b343f8e0c749d9.scope - libcontainer container f46cc93d0269a85fb3ab3ba2c90d59d0936ed1ce5a2351e8f6b343f8e0c749d9. 
Mar 6 01:42:55.138908 containerd[1476]: time="2026-03-06T01:42:55.138872355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n8fv4,Uid:40eccf84-c922-4ecd-a525-60c3638ef8c1,Namespace:calico-system,Attempt:0,}" Mar 6 01:42:55.247046 kubelet[2575]: E0306 01:42:55.246301 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cv4hv" podUID="2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6" Mar 6 01:42:55.352889 kubelet[2575]: E0306 01:42:55.351680 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.352889 kubelet[2575]: W0306 01:42:55.352041 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.413369 kubelet[2575]: E0306 01:42:55.413054 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.417080 kubelet[2575]: E0306 01:42:55.417000 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.417080 kubelet[2575]: W0306 01:42:55.417044 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.417080 kubelet[2575]: E0306 01:42:55.417086 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.418019 kubelet[2575]: E0306 01:42:55.417453 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.418019 kubelet[2575]: W0306 01:42:55.417476 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.418019 kubelet[2575]: E0306 01:42:55.417493 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.418019 kubelet[2575]: E0306 01:42:55.417941 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.418019 kubelet[2575]: W0306 01:42:55.417951 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.418019 kubelet[2575]: E0306 01:42:55.417964 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.423782 kubelet[2575]: E0306 01:42:55.423486 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.423782 kubelet[2575]: W0306 01:42:55.423579 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.423782 kubelet[2575]: E0306 01:42:55.423606 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.432669 kubelet[2575]: E0306 01:42:55.432517 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.433279 kubelet[2575]: W0306 01:42:55.433100 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.433279 kubelet[2575]: E0306 01:42:55.433191 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.435839 kubelet[2575]: E0306 01:42:55.435776 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.435839 kubelet[2575]: W0306 01:42:55.435815 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.435839 kubelet[2575]: E0306 01:42:55.435840 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.436409 kubelet[2575]: E0306 01:42:55.436315 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.436409 kubelet[2575]: W0306 01:42:55.436349 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.436409 kubelet[2575]: E0306 01:42:55.436364 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.440581 kubelet[2575]: E0306 01:42:55.437354 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.440581 kubelet[2575]: W0306 01:42:55.437371 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.440581 kubelet[2575]: E0306 01:42:55.437485 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.440581 kubelet[2575]: E0306 01:42:55.438413 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.440581 kubelet[2575]: W0306 01:42:55.438425 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.440581 kubelet[2575]: E0306 01:42:55.438439 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.440581 kubelet[2575]: E0306 01:42:55.439066 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.440581 kubelet[2575]: W0306 01:42:55.439080 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.440581 kubelet[2575]: E0306 01:42:55.439211 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.441076 kubelet[2575]: E0306 01:42:55.441024 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.441076 kubelet[2575]: W0306 01:42:55.441040 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.441150 kubelet[2575]: E0306 01:42:55.441103 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.447689 kubelet[2575]: E0306 01:42:55.442220 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.447689 kubelet[2575]: W0306 01:42:55.442236 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.447689 kubelet[2575]: E0306 01:42:55.442252 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.447689 kubelet[2575]: E0306 01:42:55.442836 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.447689 kubelet[2575]: W0306 01:42:55.442848 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.447689 kubelet[2575]: E0306 01:42:55.442942 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.447689 kubelet[2575]: E0306 01:42:55.443694 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.447689 kubelet[2575]: W0306 01:42:55.443707 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.447689 kubelet[2575]: E0306 01:42:55.443721 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.447689 kubelet[2575]: E0306 01:42:55.444644 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.448044 kubelet[2575]: W0306 01:42:55.444656 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.448044 kubelet[2575]: E0306 01:42:55.444671 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.448044 kubelet[2575]: E0306 01:42:55.445397 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.448044 kubelet[2575]: W0306 01:42:55.445413 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.448044 kubelet[2575]: E0306 01:42:55.445427 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.448044 kubelet[2575]: E0306 01:42:55.446099 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.448044 kubelet[2575]: W0306 01:42:55.446113 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.448044 kubelet[2575]: E0306 01:42:55.446354 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.448044 kubelet[2575]: E0306 01:42:55.447317 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.448044 kubelet[2575]: W0306 01:42:55.447479 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.448255 kubelet[2575]: E0306 01:42:55.447710 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.448255 kubelet[2575]: E0306 01:42:55.448189 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.448255 kubelet[2575]: W0306 01:42:55.448200 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.448255 kubelet[2575]: E0306 01:42:55.448212 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.448832 kubelet[2575]: E0306 01:42:55.448800 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.448832 kubelet[2575]: W0306 01:42:55.448827 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.448928 kubelet[2575]: E0306 01:42:55.448840 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.448928 kubelet[2575]: I0306 01:42:55.448871 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6-kubelet-dir\") pod \"csi-node-driver-cv4hv\" (UID: \"2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6\") " pod="calico-system/csi-node-driver-cv4hv" Mar 6 01:42:55.449174 kubelet[2575]: E0306 01:42:55.449139 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.449214 kubelet[2575]: W0306 01:42:55.449172 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.449214 kubelet[2575]: E0306 01:42:55.449187 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.449250 kubelet[2575]: I0306 01:42:55.449238 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6-registration-dir\") pod \"csi-node-driver-cv4hv\" (UID: \"2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6\") " pod="calico-system/csi-node-driver-cv4hv" Mar 6 01:42:55.449730 kubelet[2575]: E0306 01:42:55.449700 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.449803 kubelet[2575]: W0306 01:42:55.449732 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.449803 kubelet[2575]: E0306 01:42:55.449785 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.450237 kubelet[2575]: E0306 01:42:55.450209 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.450237 kubelet[2575]: W0306 01:42:55.450235 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.450298 kubelet[2575]: E0306 01:42:55.450248 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.450735 kubelet[2575]: E0306 01:42:55.450708 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.450735 kubelet[2575]: W0306 01:42:55.450734 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.450827 kubelet[2575]: E0306 01:42:55.450786 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.450856 kubelet[2575]: I0306 01:42:55.450845 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6-varrun\") pod \"csi-node-driver-cv4hv\" (UID: \"2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6\") " pod="calico-system/csi-node-driver-cv4hv" Mar 6 01:42:55.451367 kubelet[2575]: E0306 01:42:55.451326 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.451367 kubelet[2575]: W0306 01:42:55.451356 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.451479 kubelet[2575]: E0306 01:42:55.451371 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.451883 kubelet[2575]: I0306 01:42:55.451684 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlkkm\" (UniqueName: \"kubernetes.io/projected/2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6-kube-api-access-hlkkm\") pod \"csi-node-driver-cv4hv\" (UID: \"2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6\") " pod="calico-system/csi-node-driver-cv4hv" Mar 6 01:42:55.453174 kubelet[2575]: E0306 01:42:55.453045 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.453174 kubelet[2575]: W0306 01:42:55.453151 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.453174 kubelet[2575]: E0306 01:42:55.453172 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.453732 kubelet[2575]: E0306 01:42:55.453683 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.453732 kubelet[2575]: W0306 01:42:55.453717 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.453844 kubelet[2575]: E0306 01:42:55.453733 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.454677 kubelet[2575]: E0306 01:42:55.454632 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.454677 kubelet[2575]: W0306 01:42:55.454665 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.454858 kubelet[2575]: E0306 01:42:55.454770 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.456090 kubelet[2575]: E0306 01:42:55.455995 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.456090 kubelet[2575]: W0306 01:42:55.456035 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.456090 kubelet[2575]: E0306 01:42:55.456052 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.456281 kubelet[2575]: I0306 01:42:55.456244 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6-socket-dir\") pod \"csi-node-driver-cv4hv\" (UID: \"2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6\") " pod="calico-system/csi-node-driver-cv4hv" Mar 6 01:42:55.456716 kubelet[2575]: E0306 01:42:55.456682 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.456716 kubelet[2575]: W0306 01:42:55.456698 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.456716 kubelet[2575]: E0306 01:42:55.456712 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.457878 kubelet[2575]: E0306 01:42:55.457825 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.457878 kubelet[2575]: W0306 01:42:55.457862 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.457878 kubelet[2575]: E0306 01:42:55.457879 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.459494 kubelet[2575]: E0306 01:42:55.459470 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.459494 kubelet[2575]: W0306 01:42:55.459488 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.459697 kubelet[2575]: E0306 01:42:55.459504 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.461231 kubelet[2575]: E0306 01:42:55.461196 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.461231 kubelet[2575]: W0306 01:42:55.461229 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.461737 kubelet[2575]: E0306 01:42:55.461248 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.463702 kubelet[2575]: E0306 01:42:55.462241 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.463702 kubelet[2575]: W0306 01:42:55.462254 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.463702 kubelet[2575]: E0306 01:42:55.462266 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.504983 containerd[1476]: time="2026-03-06T01:42:55.504846833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:42:55.507585 containerd[1476]: time="2026-03-06T01:42:55.505018580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:42:55.507585 containerd[1476]: time="2026-03-06T01:42:55.505198122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:55.507585 containerd[1476]: time="2026-03-06T01:42:55.505640331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:42:55.541708 containerd[1476]: time="2026-03-06T01:42:55.541585075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7694b7b655-gjvv5,Uid:6b02dc71-0552-4bed-812f-b3b42e4e6e79,Namespace:calico-system,Attempt:0,} returns sandbox id \"f46cc93d0269a85fb3ab3ba2c90d59d0936ed1ce5a2351e8f6b343f8e0c749d9\"" Mar 6 01:42:55.544378 kubelet[2575]: E0306 01:42:55.543680 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:55.547393 containerd[1476]: time="2026-03-06T01:42:55.547260348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 6 01:42:55.560186 systemd[1]: Started cri-containerd-56d134d04bcd7e6d19eacb72d390cac1c7462f36383f1c922d68c24a6b9f17b9.scope - libcontainer container 56d134d04bcd7e6d19eacb72d390cac1c7462f36383f1c922d68c24a6b9f17b9. Mar 6 01:42:55.565014 kubelet[2575]: E0306 01:42:55.564928 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.565014 kubelet[2575]: W0306 01:42:55.564951 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.565014 kubelet[2575]: E0306 01:42:55.564976 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.566514 kubelet[2575]: E0306 01:42:55.566041 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.566514 kubelet[2575]: W0306 01:42:55.566144 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.566514 kubelet[2575]: E0306 01:42:55.566158 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.568370 kubelet[2575]: E0306 01:42:55.567071 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.568370 kubelet[2575]: W0306 01:42:55.567084 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.568370 kubelet[2575]: E0306 01:42:55.567094 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.570373 kubelet[2575]: E0306 01:42:55.570196 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.570373 kubelet[2575]: W0306 01:42:55.570213 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.570373 kubelet[2575]: E0306 01:42:55.570225 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.571346 kubelet[2575]: E0306 01:42:55.571261 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.571393 kubelet[2575]: W0306 01:42:55.571350 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.571393 kubelet[2575]: E0306 01:42:55.571367 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.572471 kubelet[2575]: E0306 01:42:55.572428 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.572471 kubelet[2575]: W0306 01:42:55.572439 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.572471 kubelet[2575]: E0306 01:42:55.572451 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.573162 kubelet[2575]: E0306 01:42:55.573052 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.573162 kubelet[2575]: W0306 01:42:55.573162 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.573246 kubelet[2575]: E0306 01:42:55.573173 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.575139 kubelet[2575]: E0306 01:42:55.574708 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.575139 kubelet[2575]: W0306 01:42:55.574720 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.575139 kubelet[2575]: E0306 01:42:55.574730 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.588373 kubelet[2575]: E0306 01:42:55.588074 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.588373 kubelet[2575]: W0306 01:42:55.588091 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.588373 kubelet[2575]: E0306 01:42:55.588104 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.588655 kubelet[2575]: E0306 01:42:55.588615 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.588655 kubelet[2575]: W0306 01:42:55.588646 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.588655 kubelet[2575]: E0306 01:42:55.588657 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.589377 kubelet[2575]: E0306 01:42:55.589294 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.589377 kubelet[2575]: W0306 01:42:55.589324 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.589377 kubelet[2575]: E0306 01:42:55.589338 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.589907 kubelet[2575]: E0306 01:42:55.589849 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.589907 kubelet[2575]: W0306 01:42:55.589880 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.589907 kubelet[2575]: E0306 01:42:55.589896 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.590375 kubelet[2575]: E0306 01:42:55.590313 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.590375 kubelet[2575]: W0306 01:42:55.590351 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.590375 kubelet[2575]: E0306 01:42:55.590368 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.590894 kubelet[2575]: E0306 01:42:55.590842 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.590894 kubelet[2575]: W0306 01:42:55.590871 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.590894 kubelet[2575]: E0306 01:42:55.590881 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.591504 kubelet[2575]: E0306 01:42:55.591448 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.591504 kubelet[2575]: W0306 01:42:55.591480 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.591504 kubelet[2575]: E0306 01:42:55.591491 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.591961 kubelet[2575]: E0306 01:42:55.591909 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.591961 kubelet[2575]: W0306 01:42:55.591941 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.592028 kubelet[2575]: E0306 01:42:55.591998 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.592895 kubelet[2575]: E0306 01:42:55.592850 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.592895 kubelet[2575]: W0306 01:42:55.592891 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.592999 kubelet[2575]: E0306 01:42:55.592920 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.593634 kubelet[2575]: E0306 01:42:55.593595 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.593634 kubelet[2575]: W0306 01:42:55.593622 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.593693 kubelet[2575]: E0306 01:42:55.593635 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.594077 kubelet[2575]: E0306 01:42:55.594040 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.594077 kubelet[2575]: W0306 01:42:55.594069 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.594183 kubelet[2575]: E0306 01:42:55.594082 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.595612 kubelet[2575]: E0306 01:42:55.594671 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.595612 kubelet[2575]: W0306 01:42:55.594687 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.595612 kubelet[2575]: E0306 01:42:55.594783 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.595612 kubelet[2575]: E0306 01:42:55.595444 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.595612 kubelet[2575]: W0306 01:42:55.595454 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.595612 kubelet[2575]: E0306 01:42:55.595466 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.596218 kubelet[2575]: E0306 01:42:55.596157 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.596218 kubelet[2575]: W0306 01:42:55.596195 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.596218 kubelet[2575]: E0306 01:42:55.596208 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.596907 kubelet[2575]: E0306 01:42:55.596855 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.596907 kubelet[2575]: W0306 01:42:55.596888 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.596907 kubelet[2575]: E0306 01:42:55.596901 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.597574 kubelet[2575]: E0306 01:42:55.597485 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.597574 kubelet[2575]: W0306 01:42:55.597563 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.597626 kubelet[2575]: E0306 01:42:55.597577 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.598614 kubelet[2575]: E0306 01:42:55.598261 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.598614 kubelet[2575]: W0306 01:42:55.598277 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.598614 kubelet[2575]: E0306 01:42:55.598289 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:55.607134 kubelet[2575]: E0306 01:42:55.606991 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:55.607264 kubelet[2575]: W0306 01:42:55.607111 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:55.607264 kubelet[2575]: E0306 01:42:55.607202 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:55.626611 containerd[1476]: time="2026-03-06T01:42:55.624682571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n8fv4,Uid:40eccf84-c922-4ecd-a525-60c3638ef8c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"56d134d04bcd7e6d19eacb72d390cac1c7462f36383f1c922d68c24a6b9f17b9\"" Mar 6 01:42:56.513046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365438555.mount: Deactivated successfully. 
Mar 6 01:42:57.151401 kubelet[2575]: E0306 01:42:57.150627 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cv4hv" podUID="2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6" Mar 6 01:42:58.766223 containerd[1476]: time="2026-03-06T01:42:58.766148355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:58.767365 containerd[1476]: time="2026-03-06T01:42:58.767245959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 6 01:42:58.768656 containerd[1476]: time="2026-03-06T01:42:58.768590166Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:58.771285 containerd[1476]: time="2026-03-06T01:42:58.771226581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:42:58.772088 containerd[1476]: time="2026-03-06T01:42:58.772049240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.224726815s" Mar 6 01:42:58.772139 containerd[1476]: time="2026-03-06T01:42:58.772094616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 6 01:42:58.773772 containerd[1476]: time="2026-03-06T01:42:58.773709441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 6 01:42:58.787912 containerd[1476]: time="2026-03-06T01:42:58.787814993Z" level=info msg="CreateContainer within sandbox \"f46cc93d0269a85fb3ab3ba2c90d59d0936ed1ce5a2351e8f6b343f8e0c749d9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 6 01:42:58.808212 containerd[1476]: time="2026-03-06T01:42:58.808115238Z" level=info msg="CreateContainer within sandbox \"f46cc93d0269a85fb3ab3ba2c90d59d0936ed1ce5a2351e8f6b343f8e0c749d9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c6a552c1108484b7dcb481fe9395a783c2d2244c7f319ed72480695dfa69b531\"" Mar 6 01:42:58.808755 containerd[1476]: time="2026-03-06T01:42:58.808699256Z" level=info msg="StartContainer for \"c6a552c1108484b7dcb481fe9395a783c2d2244c7f319ed72480695dfa69b531\"" Mar 6 01:42:58.851735 systemd[1]: Started cri-containerd-c6a552c1108484b7dcb481fe9395a783c2d2244c7f319ed72480695dfa69b531.scope - libcontainer container c6a552c1108484b7dcb481fe9395a783c2d2244c7f319ed72480695dfa69b531. 
Mar 6 01:42:58.919218 containerd[1476]: time="2026-03-06T01:42:58.919149275Z" level=info msg="StartContainer for \"c6a552c1108484b7dcb481fe9395a783c2d2244c7f319ed72480695dfa69b531\" returns successfully" Mar 6 01:42:59.165916 kubelet[2575]: E0306 01:42:59.165649 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cv4hv" podUID="2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6" Mar 6 01:42:59.512133 kubelet[2575]: E0306 01:42:59.511979 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:42:59.527056 kubelet[2575]: I0306 01:42:59.526856 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-7694b7b655-gjvv5" podStartSLOduration=4.299042426 podStartE2EDuration="7.52683368s" podCreationTimestamp="2026-03-06 01:42:52 +0000 UTC" firstStartedPulling="2026-03-06 01:42:55.54568815 +0000 UTC m=+21.082292185" lastFinishedPulling="2026-03-06 01:42:58.773479394 +0000 UTC m=+24.310083439" observedRunningTime="2026-03-06 01:42:59.526808368 +0000 UTC m=+25.063412403" watchObservedRunningTime="2026-03-06 01:42:59.52683368 +0000 UTC m=+25.063437735" Mar 6 01:42:59.550024 kubelet[2575]: E0306 01:42:59.549798 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.550024 kubelet[2575]: W0306 01:42:59.549826 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.550024 kubelet[2575]: E0306 01:42:59.549853 2575 plugins.go:697] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.552022 kubelet[2575]: E0306 01:42:59.551834 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.552022 kubelet[2575]: W0306 01:42:59.551854 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.552022 kubelet[2575]: E0306 01:42:59.551874 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.553635 kubelet[2575]: E0306 01:42:59.553368 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.553635 kubelet[2575]: W0306 01:42:59.553384 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.553635 kubelet[2575]: E0306 01:42:59.553399 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.553870 kubelet[2575]: E0306 01:42:59.553855 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.554141 kubelet[2575]: W0306 01:42:59.553994 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.554141 kubelet[2575]: E0306 01:42:59.554017 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.554463 kubelet[2575]: E0306 01:42:59.554429 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.554463 kubelet[2575]: W0306 01:42:59.554454 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.554463 kubelet[2575]: E0306 01:42:59.554464 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.555220 kubelet[2575]: E0306 01:42:59.555042 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.555220 kubelet[2575]: W0306 01:42:59.555054 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.555220 kubelet[2575]: E0306 01:42:59.555066 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.561754 kubelet[2575]: E0306 01:42:59.559678 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.561754 kubelet[2575]: W0306 01:42:59.559697 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.561754 kubelet[2575]: E0306 01:42:59.559716 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.561754 kubelet[2575]: E0306 01:42:59.560081 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.561754 kubelet[2575]: W0306 01:42:59.560092 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.561754 kubelet[2575]: E0306 01:42:59.560160 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.561754 kubelet[2575]: E0306 01:42:59.560640 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.561754 kubelet[2575]: W0306 01:42:59.560656 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.561754 kubelet[2575]: E0306 01:42:59.561063 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.561754 kubelet[2575]: E0306 01:42:59.561404 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.562500 kubelet[2575]: W0306 01:42:59.561415 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.562500 kubelet[2575]: E0306 01:42:59.561427 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.562500 kubelet[2575]: E0306 01:42:59.562445 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.562500 kubelet[2575]: W0306 01:42:59.562456 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.562500 kubelet[2575]: E0306 01:42:59.562469 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.563417 kubelet[2575]: E0306 01:42:59.563353 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.563417 kubelet[2575]: W0306 01:42:59.563390 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.563417 kubelet[2575]: E0306 01:42:59.563404 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.563854 kubelet[2575]: E0306 01:42:59.563790 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.563854 kubelet[2575]: W0306 01:42:59.563826 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.563854 kubelet[2575]: E0306 01:42:59.563839 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.564242 kubelet[2575]: E0306 01:42:59.564200 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.564242 kubelet[2575]: W0306 01:42:59.564230 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.564242 kubelet[2575]: E0306 01:42:59.564243 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.564738 kubelet[2575]: E0306 01:42:59.564600 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.564738 kubelet[2575]: W0306 01:42:59.564612 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.564738 kubelet[2575]: E0306 01:42:59.564622 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.646224 kubelet[2575]: E0306 01:42:59.646165 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.646224 kubelet[2575]: W0306 01:42:59.646188 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.646224 kubelet[2575]: E0306 01:42:59.646213 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.647089 kubelet[2575]: E0306 01:42:59.646691 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.647089 kubelet[2575]: W0306 01:42:59.646703 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.647089 kubelet[2575]: E0306 01:42:59.646717 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.647217 kubelet[2575]: E0306 01:42:59.647120 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.647217 kubelet[2575]: W0306 01:42:59.647130 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.647217 kubelet[2575]: E0306 01:42:59.647141 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.647593 kubelet[2575]: E0306 01:42:59.647505 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.647593 kubelet[2575]: W0306 01:42:59.647574 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.647593 kubelet[2575]: E0306 01:42:59.647588 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.648033 kubelet[2575]: E0306 01:42:59.647932 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.648033 kubelet[2575]: W0306 01:42:59.647963 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.648033 kubelet[2575]: E0306 01:42:59.647975 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.648348 kubelet[2575]: E0306 01:42:59.648304 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.648348 kubelet[2575]: W0306 01:42:59.648326 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.648348 kubelet[2575]: E0306 01:42:59.648337 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.648870 kubelet[2575]: E0306 01:42:59.648801 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.648870 kubelet[2575]: W0306 01:42:59.648828 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.648870 kubelet[2575]: E0306 01:42:59.648840 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.649251 kubelet[2575]: E0306 01:42:59.649183 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.649251 kubelet[2575]: W0306 01:42:59.649208 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.649251 kubelet[2575]: E0306 01:42:59.649218 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.649673 kubelet[2575]: E0306 01:42:59.649619 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.649673 kubelet[2575]: W0306 01:42:59.649647 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.649673 kubelet[2575]: E0306 01:42:59.649661 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.650229 kubelet[2575]: E0306 01:42:59.650198 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.650273 kubelet[2575]: W0306 01:42:59.650231 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.650273 kubelet[2575]: E0306 01:42:59.650249 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.650653 kubelet[2575]: E0306 01:42:59.650629 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.650703 kubelet[2575]: W0306 01:42:59.650655 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.650754 kubelet[2575]: E0306 01:42:59.650668 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.651197 kubelet[2575]: E0306 01:42:59.651168 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.651233 kubelet[2575]: W0306 01:42:59.651198 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.651233 kubelet[2575]: E0306 01:42:59.651214 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.651728 kubelet[2575]: E0306 01:42:59.651664 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.651728 kubelet[2575]: W0306 01:42:59.651694 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.651728 kubelet[2575]: E0306 01:42:59.651707 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.652187 kubelet[2575]: E0306 01:42:59.652138 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.652187 kubelet[2575]: W0306 01:42:59.652169 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.652187 kubelet[2575]: E0306 01:42:59.652185 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.652880 kubelet[2575]: E0306 01:42:59.652855 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.652880 kubelet[2575]: W0306 01:42:59.652878 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.653010 kubelet[2575]: E0306 01:42:59.652933 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.653368 kubelet[2575]: E0306 01:42:59.653315 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.653368 kubelet[2575]: W0306 01:42:59.653352 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.653368 kubelet[2575]: E0306 01:42:59.653366 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:42:59.653764 kubelet[2575]: E0306 01:42:59.653736 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.653818 kubelet[2575]: W0306 01:42:59.653762 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.653818 kubelet[2575]: E0306 01:42:59.653778 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:42:59.654245 kubelet[2575]: E0306 01:42:59.654203 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:42:59.654245 kubelet[2575]: W0306 01:42:59.654233 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:42:59.654339 kubelet[2575]: E0306 01:42:59.654249 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:00.445271 containerd[1476]: time="2026-03-06T01:43:00.445195657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:00.446294 containerd[1476]: time="2026-03-06T01:43:00.446200664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 6 01:43:00.447845 containerd[1476]: time="2026-03-06T01:43:00.447786121Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:00.451770 containerd[1476]: time="2026-03-06T01:43:00.451705888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:00.452582 containerd[1476]: time="2026-03-06T01:43:00.452362682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.678606803s" Mar 6 01:43:00.452582 containerd[1476]: time="2026-03-06T01:43:00.452475306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 6 01:43:00.458479 containerd[1476]: time="2026-03-06T01:43:00.458423189Z" level=info msg="CreateContainer within sandbox \"56d134d04bcd7e6d19eacb72d390cac1c7462f36383f1c922d68c24a6b9f17b9\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 6 01:43:00.496017 containerd[1476]: time="2026-03-06T01:43:00.495873515Z" level=info msg="CreateContainer within sandbox \"56d134d04bcd7e6d19eacb72d390cac1c7462f36383f1c922d68c24a6b9f17b9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5a00c22ec3000387e848103bdc9e4076dbe1f61873fe11d825428f2e6c99ba21\"" Mar 6 01:43:00.497865 containerd[1476]: time="2026-03-06T01:43:00.496834067Z" level=info msg="StartContainer for \"5a00c22ec3000387e848103bdc9e4076dbe1f61873fe11d825428f2e6c99ba21\"" Mar 6 01:43:00.517008 kubelet[2575]: E0306 01:43:00.516977 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:00.556760 systemd[1]: Started cri-containerd-5a00c22ec3000387e848103bdc9e4076dbe1f61873fe11d825428f2e6c99ba21.scope - libcontainer container 5a00c22ec3000387e848103bdc9e4076dbe1f61873fe11d825428f2e6c99ba21. Mar 6 01:43:00.575282 kubelet[2575]: E0306 01:43:00.575137 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.575282 kubelet[2575]: W0306 01:43:00.575175 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.575282 kubelet[2575]: E0306 01:43:00.575200 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:00.575819 kubelet[2575]: E0306 01:43:00.575756 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.575819 kubelet[2575]: W0306 01:43:00.575792 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.575819 kubelet[2575]: E0306 01:43:00.575811 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:00.577451 kubelet[2575]: E0306 01:43:00.576724 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.577451 kubelet[2575]: W0306 01:43:00.576740 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.577451 kubelet[2575]: E0306 01:43:00.576815 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:00.577451 kubelet[2575]: E0306 01:43:00.577315 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.577451 kubelet[2575]: W0306 01:43:00.577324 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.577451 kubelet[2575]: E0306 01:43:00.577334 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:00.577747 kubelet[2575]: E0306 01:43:00.577721 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.577747 kubelet[2575]: W0306 01:43:00.577730 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.577747 kubelet[2575]: E0306 01:43:00.577740 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:00.578109 kubelet[2575]: E0306 01:43:00.578076 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.578109 kubelet[2575]: W0306 01:43:00.578100 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.578109 kubelet[2575]: E0306 01:43:00.578110 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:00.578471 kubelet[2575]: E0306 01:43:00.578442 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.578471 kubelet[2575]: W0306 01:43:00.578453 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.578471 kubelet[2575]: E0306 01:43:00.578462 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:00.579395 kubelet[2575]: E0306 01:43:00.579334 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.579395 kubelet[2575]: W0306 01:43:00.579366 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.579481 kubelet[2575]: E0306 01:43:00.579412 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:00.580075 kubelet[2575]: E0306 01:43:00.580015 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.580075 kubelet[2575]: W0306 01:43:00.580047 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.580148 kubelet[2575]: E0306 01:43:00.580118 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:00.580648 kubelet[2575]: E0306 01:43:00.580604 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.581658 kubelet[2575]: W0306 01:43:00.580764 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.581658 kubelet[2575]: E0306 01:43:00.580781 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:00.581658 kubelet[2575]: E0306 01:43:00.581269 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.581658 kubelet[2575]: W0306 01:43:00.581278 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.581658 kubelet[2575]: E0306 01:43:00.581288 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:00.581859 kubelet[2575]: E0306 01:43:00.581785 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.581859 kubelet[2575]: W0306 01:43:00.581796 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.581859 kubelet[2575]: E0306 01:43:00.581806 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:00.582266 kubelet[2575]: E0306 01:43:00.582208 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.582451 kubelet[2575]: W0306 01:43:00.582394 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.582451 kubelet[2575]: E0306 01:43:00.582432 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:43:00.582840 kubelet[2575]: E0306 01:43:00.582775 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.582840 kubelet[2575]: W0306 01:43:00.582807 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.582840 kubelet[2575]: E0306 01:43:00.582820 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:00.583302 kubelet[2575]: E0306 01:43:00.583226 2575 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:43:00.583302 kubelet[2575]: W0306 01:43:00.583261 2575 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:43:00.583302 kubelet[2575]: E0306 01:43:00.583271 2575 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:43:00.606345 containerd[1476]: time="2026-03-06T01:43:00.606271872Z" level=info msg="StartContainer for \"5a00c22ec3000387e848103bdc9e4076dbe1f61873fe11d825428f2e6c99ba21\" returns successfully" Mar 6 01:43:00.634450 systemd[1]: cri-containerd-5a00c22ec3000387e848103bdc9e4076dbe1f61873fe11d825428f2e6c99ba21.scope: Deactivated successfully. Mar 6 01:43:00.689262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a00c22ec3000387e848103bdc9e4076dbe1f61873fe11d825428f2e6c99ba21-rootfs.mount: Deactivated successfully. 
Mar 6 01:43:00.777375 containerd[1476]: time="2026-03-06T01:43:00.777233325Z" level=info msg="shim disconnected" id=5a00c22ec3000387e848103bdc9e4076dbe1f61873fe11d825428f2e6c99ba21 namespace=k8s.io Mar 6 01:43:00.777375 containerd[1476]: time="2026-03-06T01:43:00.777354085Z" level=warning msg="cleaning up after shim disconnected" id=5a00c22ec3000387e848103bdc9e4076dbe1f61873fe11d825428f2e6c99ba21 namespace=k8s.io Mar 6 01:43:00.777375 containerd[1476]: time="2026-03-06T01:43:00.777363772Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:43:01.149845 kubelet[2575]: E0306 01:43:01.149640 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cv4hv" podUID="2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6" Mar 6 01:43:01.521291 kubelet[2575]: E0306 01:43:01.521181 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:01.522994 containerd[1476]: time="2026-03-06T01:43:01.522909580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 6 01:43:03.149860 kubelet[2575]: E0306 01:43:03.149778 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cv4hv" podUID="2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6" Mar 6 01:43:05.182700 kubelet[2575]: E0306 01:43:05.181423 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-cv4hv" podUID="2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6" Mar 6 01:43:05.367357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1005879641.mount: Deactivated successfully. Mar 6 01:43:05.640601 containerd[1476]: time="2026-03-06T01:43:05.640277004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 6 01:43:05.651906 containerd[1476]: time="2026-03-06T01:43:05.651823186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.128861077s" Mar 6 01:43:05.651906 containerd[1476]: time="2026-03-06T01:43:05.651896204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 6 01:43:05.652989 containerd[1476]: time="2026-03-06T01:43:05.652915200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:05.670005 containerd[1476]: time="2026-03-06T01:43:05.669902156Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:05.671431 containerd[1476]: time="2026-03-06T01:43:05.671313580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:05.674303 containerd[1476]: time="2026-03-06T01:43:05.674234771Z" level=info msg="CreateContainer within sandbox 
\"56d134d04bcd7e6d19eacb72d390cac1c7462f36383f1c922d68c24a6b9f17b9\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 6 01:43:05.852005 containerd[1476]: time="2026-03-06T01:43:05.851909581Z" level=info msg="CreateContainer within sandbox \"56d134d04bcd7e6d19eacb72d390cac1c7462f36383f1c922d68c24a6b9f17b9\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"5f44adc86377b707a2a85c7bf2b2dd5270c20d416576ed40fb96c3f2c8922234\"" Mar 6 01:43:05.852972 containerd[1476]: time="2026-03-06T01:43:05.852929796Z" level=info msg="StartContainer for \"5f44adc86377b707a2a85c7bf2b2dd5270c20d416576ed40fb96c3f2c8922234\"" Mar 6 01:43:05.939808 systemd[1]: Started cri-containerd-5f44adc86377b707a2a85c7bf2b2dd5270c20d416576ed40fb96c3f2c8922234.scope - libcontainer container 5f44adc86377b707a2a85c7bf2b2dd5270c20d416576ed40fb96c3f2c8922234. Mar 6 01:43:06.000279 containerd[1476]: time="2026-03-06T01:43:06.000217757Z" level=info msg="StartContainer for \"5f44adc86377b707a2a85c7bf2b2dd5270c20d416576ed40fb96c3f2c8922234\" returns successfully" Mar 6 01:43:06.081320 systemd[1]: cri-containerd-5f44adc86377b707a2a85c7bf2b2dd5270c20d416576ed40fb96c3f2c8922234.scope: Deactivated successfully. Mar 6 01:43:06.217896 containerd[1476]: time="2026-03-06T01:43:06.217428252Z" level=info msg="shim disconnected" id=5f44adc86377b707a2a85c7bf2b2dd5270c20d416576ed40fb96c3f2c8922234 namespace=k8s.io Mar 6 01:43:06.217896 containerd[1476]: time="2026-03-06T01:43:06.217511911Z" level=warning msg="cleaning up after shim disconnected" id=5f44adc86377b707a2a85c7bf2b2dd5270c20d416576ed40fb96c3f2c8922234 namespace=k8s.io Mar 6 01:43:06.217896 containerd[1476]: time="2026-03-06T01:43:06.217572095Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:43:06.367490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f44adc86377b707a2a85c7bf2b2dd5270c20d416576ed40fb96c3f2c8922234-rootfs.mount: Deactivated successfully. 
Mar 6 01:43:06.536166 containerd[1476]: time="2026-03-06T01:43:06.536040865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 6 01:43:07.150448 kubelet[2575]: E0306 01:43:07.150385 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cv4hv" podUID="2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6" Mar 6 01:43:08.417678 containerd[1476]: time="2026-03-06T01:43:08.417591859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:08.419014 containerd[1476]: time="2026-03-06T01:43:08.418920736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 6 01:43:08.420397 containerd[1476]: time="2026-03-06T01:43:08.420340843Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:08.423298 containerd[1476]: time="2026-03-06T01:43:08.423233759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:43:08.424963 containerd[1476]: time="2026-03-06T01:43:08.424895962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.888787398s" Mar 6 01:43:08.424963 containerd[1476]: time="2026-03-06T01:43:08.424960433Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 6 01:43:08.433170 containerd[1476]: time="2026-03-06T01:43:08.433018080Z" level=info msg="CreateContainer within sandbox \"56d134d04bcd7e6d19eacb72d390cac1c7462f36383f1c922d68c24a6b9f17b9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 01:43:08.453828 containerd[1476]: time="2026-03-06T01:43:08.453740154Z" level=info msg="CreateContainer within sandbox \"56d134d04bcd7e6d19eacb72d390cac1c7462f36383f1c922d68c24a6b9f17b9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7295e4c8f08db3c97aaae72cee031dce121fb86453aabeea7cb9f97afcc6d7ec\"" Mar 6 01:43:08.454575 containerd[1476]: time="2026-03-06T01:43:08.454424250Z" level=info msg="StartContainer for \"7295e4c8f08db3c97aaae72cee031dce121fb86453aabeea7cb9f97afcc6d7ec\"" Mar 6 01:43:08.526875 systemd[1]: Started cri-containerd-7295e4c8f08db3c97aaae72cee031dce121fb86453aabeea7cb9f97afcc6d7ec.scope - libcontainer container 7295e4c8f08db3c97aaae72cee031dce121fb86453aabeea7cb9f97afcc6d7ec. Mar 6 01:43:08.576024 containerd[1476]: time="2026-03-06T01:43:08.575856688Z" level=info msg="StartContainer for \"7295e4c8f08db3c97aaae72cee031dce121fb86453aabeea7cb9f97afcc6d7ec\" returns successfully" Mar 6 01:43:09.152164 kubelet[2575]: E0306 01:43:09.152082 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cv4hv" podUID="2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6" Mar 6 01:43:09.294375 systemd[1]: cri-containerd-7295e4c8f08db3c97aaae72cee031dce121fb86453aabeea7cb9f97afcc6d7ec.scope: Deactivated successfully. 
Mar 6 01:43:09.329283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7295e4c8f08db3c97aaae72cee031dce121fb86453aabeea7cb9f97afcc6d7ec-rootfs.mount: Deactivated successfully. Mar 6 01:43:09.330999 kubelet[2575]: I0306 01:43:09.329983 2575 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 6 01:43:09.365447 containerd[1476]: time="2026-03-06T01:43:09.365304875Z" level=info msg="shim disconnected" id=7295e4c8f08db3c97aaae72cee031dce121fb86453aabeea7cb9f97afcc6d7ec namespace=k8s.io Mar 6 01:43:09.365447 containerd[1476]: time="2026-03-06T01:43:09.365374086Z" level=warning msg="cleaning up after shim disconnected" id=7295e4c8f08db3c97aaae72cee031dce121fb86453aabeea7cb9f97afcc6d7ec namespace=k8s.io Mar 6 01:43:09.365447 containerd[1476]: time="2026-03-06T01:43:09.365389254Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:43:09.409180 systemd[1]: Created slice kubepods-besteffort-podfed33a75_c212_47b5_8552_87790ea76352.slice - libcontainer container kubepods-besteffort-podfed33a75_c212_47b5_8552_87790ea76352.slice. Mar 6 01:43:09.427948 systemd[1]: Created slice kubepods-besteffort-pod57e3989c_45dd_43eb_a32d_f906d625c6d0.slice - libcontainer container kubepods-besteffort-pod57e3989c_45dd_43eb_a32d_f906d625c6d0.slice. Mar 6 01:43:09.445943 systemd[1]: Created slice kubepods-besteffort-pod763ef55a_80e2_49cd_8289_67ad48649a32.slice - libcontainer container kubepods-besteffort-pod763ef55a_80e2_49cd_8289_67ad48649a32.slice. Mar 6 01:43:09.462973 systemd[1]: Created slice kubepods-burstable-pod90fa09d7_47f2_40ee_97f2_b3546e0b3fa4.slice - libcontainer container kubepods-burstable-pod90fa09d7_47f2_40ee_97f2_b3546e0b3fa4.slice. Mar 6 01:43:09.474731 systemd[1]: Created slice kubepods-besteffort-podae33386a_6914_457b_a4b5_24f516f2dd38.slice - libcontainer container kubepods-besteffort-podae33386a_6914_457b_a4b5_24f516f2dd38.slice. 
Mar 6 01:43:09.485023 systemd[1]: Created slice kubepods-besteffort-poded20f622_f835_462f_a791_b755c458eb23.slice - libcontainer container kubepods-besteffort-poded20f622_f835_462f_a791_b755c458eb23.slice. Mar 6 01:43:09.492262 systemd[1]: Created slice kubepods-burstable-pod90ab0297_0497_4e43_9006_2a17f86d78c6.slice - libcontainer container kubepods-burstable-pod90ab0297_0497_4e43_9006_2a17f86d78c6.slice. Mar 6 01:43:09.533966 kubelet[2575]: I0306 01:43:09.533874 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/763ef55a-80e2-49cd-8289-67ad48649a32-config\") pod \"goldmane-9f7667bb8-5c9lf\" (UID: \"763ef55a-80e2-49cd-8289-67ad48649a32\") " pod="calico-system/goldmane-9f7667bb8-5c9lf" Mar 6 01:43:09.533966 kubelet[2575]: I0306 01:43:09.533960 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ab0297-0497-4e43-9006-2a17f86d78c6-config-volume\") pod \"coredns-7d764666f9-wrbw9\" (UID: \"90ab0297-0497-4e43-9006-2a17f86d78c6\") " pod="kube-system/coredns-7d764666f9-wrbw9" Mar 6 01:43:09.534328 kubelet[2575]: I0306 01:43:09.533990 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tsrk\" (UniqueName: \"kubernetes.io/projected/90ab0297-0497-4e43-9006-2a17f86d78c6-kube-api-access-7tsrk\") pod \"coredns-7d764666f9-wrbw9\" (UID: \"90ab0297-0497-4e43-9006-2a17f86d78c6\") " pod="kube-system/coredns-7d764666f9-wrbw9" Mar 6 01:43:09.534328 kubelet[2575]: I0306 01:43:09.534024 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x85kh\" (UniqueName: \"kubernetes.io/projected/ed20f622-f835-462f-a791-b755c458eb23-kube-api-access-x85kh\") pod \"whisker-745bcc5478-hcnks\" (UID: \"ed20f622-f835-462f-a791-b755c458eb23\") " 
pod="calico-system/whisker-745bcc5478-hcnks" Mar 6 01:43:09.534328 kubelet[2575]: I0306 01:43:09.534052 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flkqx\" (UniqueName: \"kubernetes.io/projected/57e3989c-45dd-43eb-a32d-f906d625c6d0-kube-api-access-flkqx\") pod \"calico-apiserver-769b589d67-kcwqc\" (UID: \"57e3989c-45dd-43eb-a32d-f906d625c6d0\") " pod="calico-system/calico-apiserver-769b589d67-kcwqc" Mar 6 01:43:09.534328 kubelet[2575]: I0306 01:43:09.534129 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxxsp\" (UniqueName: \"kubernetes.io/projected/763ef55a-80e2-49cd-8289-67ad48649a32-kube-api-access-jxxsp\") pod \"goldmane-9f7667bb8-5c9lf\" (UID: \"763ef55a-80e2-49cd-8289-67ad48649a32\") " pod="calico-system/goldmane-9f7667bb8-5c9lf" Mar 6 01:43:09.534328 kubelet[2575]: I0306 01:43:09.534160 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cbjm\" (UniqueName: \"kubernetes.io/projected/ae33386a-6914-457b-a4b5-24f516f2dd38-kube-api-access-4cbjm\") pod \"calico-kube-controllers-5cfb8f57f7-pz9ts\" (UID: \"ae33386a-6914-457b-a4b5-24f516f2dd38\") " pod="calico-system/calico-kube-controllers-5cfb8f57f7-pz9ts" Mar 6 01:43:09.534613 kubelet[2575]: I0306 01:43:09.534185 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ed20f622-f835-462f-a791-b755c458eb23-nginx-config\") pod \"whisker-745bcc5478-hcnks\" (UID: \"ed20f622-f835-462f-a791-b755c458eb23\") " pod="calico-system/whisker-745bcc5478-hcnks" Mar 6 01:43:09.534613 kubelet[2575]: I0306 01:43:09.534244 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/fed33a75-c212-47b5-8552-87790ea76352-calico-apiserver-certs\") pod \"calico-apiserver-769b589d67-dgxvb\" (UID: \"fed33a75-c212-47b5-8552-87790ea76352\") " pod="calico-system/calico-apiserver-769b589d67-dgxvb" Mar 6 01:43:09.534613 kubelet[2575]: I0306 01:43:09.534285 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90fa09d7-47f2-40ee-97f2-b3546e0b3fa4-config-volume\") pod \"coredns-7d764666f9-pnmw7\" (UID: \"90fa09d7-47f2-40ee-97f2-b3546e0b3fa4\") " pod="kube-system/coredns-7d764666f9-pnmw7" Mar 6 01:43:09.534613 kubelet[2575]: I0306 01:43:09.534309 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/763ef55a-80e2-49cd-8289-67ad48649a32-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-5c9lf\" (UID: \"763ef55a-80e2-49cd-8289-67ad48649a32\") " pod="calico-system/goldmane-9f7667bb8-5c9lf" Mar 6 01:43:09.534613 kubelet[2575]: I0306 01:43:09.534332 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed20f622-f835-462f-a791-b755c458eb23-whisker-backend-key-pair\") pod \"whisker-745bcc5478-hcnks\" (UID: \"ed20f622-f835-462f-a791-b755c458eb23\") " pod="calico-system/whisker-745bcc5478-hcnks" Mar 6 01:43:09.534836 kubelet[2575]: I0306 01:43:09.534351 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed20f622-f835-462f-a791-b755c458eb23-whisker-ca-bundle\") pod \"whisker-745bcc5478-hcnks\" (UID: \"ed20f622-f835-462f-a791-b755c458eb23\") " pod="calico-system/whisker-745bcc5478-hcnks" Mar 6 01:43:09.534836 kubelet[2575]: I0306 01:43:09.534415 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-9xbp5\" (UniqueName: \"kubernetes.io/projected/90fa09d7-47f2-40ee-97f2-b3546e0b3fa4-kube-api-access-9xbp5\") pod \"coredns-7d764666f9-pnmw7\" (UID: \"90fa09d7-47f2-40ee-97f2-b3546e0b3fa4\") " pod="kube-system/coredns-7d764666f9-pnmw7" Mar 6 01:43:09.534836 kubelet[2575]: I0306 01:43:09.534466 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/763ef55a-80e2-49cd-8289-67ad48649a32-goldmane-key-pair\") pod \"goldmane-9f7667bb8-5c9lf\" (UID: \"763ef55a-80e2-49cd-8289-67ad48649a32\") " pod="calico-system/goldmane-9f7667bb8-5c9lf" Mar 6 01:43:09.534836 kubelet[2575]: I0306 01:43:09.534515 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57e3989c-45dd-43eb-a32d-f906d625c6d0-calico-apiserver-certs\") pod \"calico-apiserver-769b589d67-kcwqc\" (UID: \"57e3989c-45dd-43eb-a32d-f906d625c6d0\") " pod="calico-system/calico-apiserver-769b589d67-kcwqc" Mar 6 01:43:09.534836 kubelet[2575]: I0306 01:43:09.534610 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae33386a-6914-457b-a4b5-24f516f2dd38-tigera-ca-bundle\") pod \"calico-kube-controllers-5cfb8f57f7-pz9ts\" (UID: \"ae33386a-6914-457b-a4b5-24f516f2dd38\") " pod="calico-system/calico-kube-controllers-5cfb8f57f7-pz9ts" Mar 6 01:43:09.535037 kubelet[2575]: I0306 01:43:09.534634 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7bpk\" (UniqueName: \"kubernetes.io/projected/fed33a75-c212-47b5-8552-87790ea76352-kube-api-access-j7bpk\") pod \"calico-apiserver-769b589d67-dgxvb\" (UID: \"fed33a75-c212-47b5-8552-87790ea76352\") " pod="calico-system/calico-apiserver-769b589d67-dgxvb" Mar 6 01:43:09.568652 
containerd[1476]: time="2026-03-06T01:43:09.568503124Z" level=info msg="CreateContainer within sandbox \"56d134d04bcd7e6d19eacb72d390cac1c7462f36383f1c922d68c24a6b9f17b9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 6 01:43:09.593454 containerd[1476]: time="2026-03-06T01:43:09.593341266Z" level=info msg="CreateContainer within sandbox \"56d134d04bcd7e6d19eacb72d390cac1c7462f36383f1c922d68c24a6b9f17b9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"86a1042dca1558125bec0f11aef8bd7e94e2973be95c96f6309b86f6709fa9fd\"" Mar 6 01:43:09.594293 containerd[1476]: time="2026-03-06T01:43:09.594178588Z" level=info msg="StartContainer for \"86a1042dca1558125bec0f11aef8bd7e94e2973be95c96f6309b86f6709fa9fd\"" Mar 6 01:43:09.632752 systemd[1]: Started cri-containerd-86a1042dca1558125bec0f11aef8bd7e94e2973be95c96f6309b86f6709fa9fd.scope - libcontainer container 86a1042dca1558125bec0f11aef8bd7e94e2973be95c96f6309b86f6709fa9fd. Mar 6 01:43:09.699658 containerd[1476]: time="2026-03-06T01:43:09.699401482Z" level=info msg="StartContainer for \"86a1042dca1558125bec0f11aef8bd7e94e2973be95c96f6309b86f6709fa9fd\" returns successfully" Mar 6 01:43:09.717184 containerd[1476]: time="2026-03-06T01:43:09.717064245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769b589d67-dgxvb,Uid:fed33a75-c212-47b5-8552-87790ea76352,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:09.740270 containerd[1476]: time="2026-03-06T01:43:09.740184985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769b589d67-kcwqc,Uid:57e3989c-45dd-43eb-a32d-f906d625c6d0,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:09.757517 containerd[1476]: time="2026-03-06T01:43:09.757346704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5c9lf,Uid:763ef55a-80e2-49cd-8289-67ad48649a32,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:09.773312 kubelet[2575]: E0306 01:43:09.773197 2575 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:09.774065 containerd[1476]: time="2026-03-06T01:43:09.773811110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pnmw7,Uid:90fa09d7-47f2-40ee-97f2-b3546e0b3fa4,Namespace:kube-system,Attempt:0,}" Mar 6 01:43:09.790170 containerd[1476]: time="2026-03-06T01:43:09.789804169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cfb8f57f7-pz9ts,Uid:ae33386a-6914-457b-a4b5-24f516f2dd38,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:09.792816 containerd[1476]: time="2026-03-06T01:43:09.792768047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745bcc5478-hcnks,Uid:ed20f622-f835-462f-a791-b755c458eb23,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:09.806918 kubelet[2575]: E0306 01:43:09.806433 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:09.808098 containerd[1476]: time="2026-03-06T01:43:09.808068846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wrbw9,Uid:90ab0297-0497-4e43-9006-2a17f86d78c6,Namespace:kube-system,Attempt:0,}" Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.123 [INFO][3705] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.124 [INFO][3705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" iface="eth0" netns="/var/run/netns/cni-37fe89c2-036d-4114-32db-8431e3e4fca6" Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.124 [INFO][3705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" iface="eth0" netns="/var/run/netns/cni-37fe89c2-036d-4114-32db-8431e3e4fca6" Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.125 [INFO][3705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" iface="eth0" netns="/var/run/netns/cni-37fe89c2-036d-4114-32db-8431e3e4fca6" Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.125 [INFO][3705] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.125 [INFO][3705] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.244 [INFO][3757] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" HandleID="k8s-pod-network.1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" Workload="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.244 [INFO][3757] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.292 [INFO][3757] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.297 [WARNING][3757] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" HandleID="k8s-pod-network.1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" Workload="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.297 [INFO][3757] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" HandleID="k8s-pod-network.1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" Workload="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.298 [INFO][3757] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:10.308609 containerd[1476]: 2026-03-06 01:43:10.301 [INFO][3705] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21" Mar 6 01:43:10.316499 systemd-networkd[1413]: cali6886e4bfcc6: Link UP Mar 6 01:43:10.319198 systemd-networkd[1413]: cali6886e4bfcc6: Gained carrier Mar 6 01:43:10.319620 containerd[1476]: time="2026-03-06T01:43:10.319573691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wrbw9,Uid:90ab0297-0497-4e43-9006-2a17f86d78c6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.161 [INFO][3706] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.164 [INFO][3706] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" iface="eth0" netns="/var/run/netns/cni-c006b006-b99d-6ed2-1e3b-46e16507b34b" Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.164 [INFO][3706] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" iface="eth0" netns="/var/run/netns/cni-c006b006-b99d-6ed2-1e3b-46e16507b34b" Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.165 [INFO][3706] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" iface="eth0" netns="/var/run/netns/cni-c006b006-b99d-6ed2-1e3b-46e16507b34b" Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.165 [INFO][3706] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.165 [INFO][3706] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.244 [INFO][3768] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" HandleID="k8s-pod-network.2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" Workload="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.246 [INFO][3768] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.299 [INFO][3768] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.305 [WARNING][3768] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" HandleID="k8s-pod-network.2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" Workload="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.305 [INFO][3768] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" HandleID="k8s-pod-network.2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" Workload="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.309 [INFO][3768] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:10.325836 containerd[1476]: 2026-03-06 01:43:10.318 [INFO][3706] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef" Mar 6 01:43:10.333664 containerd[1476]: time="2026-03-06T01:43:10.332036897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5c9lf,Uid:763ef55a-80e2-49cd-8289-67ad48649a32,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.338953 kubelet[2575]: E0306 01:43:10.338919 2575 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.340124 kubelet[2575]: E0306 01:43:10.339710 2575 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-5c9lf" Mar 6 01:43:10.340124 kubelet[2575]: E0306 01:43:10.339737 2575 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-5c9lf" Mar 6 01:43:10.340124 kubelet[2575]: E0306 01:43:10.339838 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-5c9lf_calico-system(763ef55a-80e2-49cd-8289-67ad48649a32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-5c9lf_calico-system(763ef55a-80e2-49cd-8289-67ad48649a32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cba560c259380c819e625b1d1e855e1a41360d7365af9e4cbfa9a27b13eb6ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-5c9lf" podUID="763ef55a-80e2-49cd-8289-67ad48649a32" Mar 6 01:43:10.340596 kubelet[2575]: E0306 01:43:10.338817 2575 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.340596 kubelet[2575]: E0306 01:43:10.340048 2575 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-wrbw9" Mar 6 01:43:10.340596 kubelet[2575]: E0306 01:43:10.340286 2575 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-wrbw9" Mar 6 01:43:10.340676 kubelet[2575]: E0306 01:43:10.340325 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-wrbw9_kube-system(90ab0297-0497-4e43-9006-2a17f86d78c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-wrbw9_kube-system(90ab0297-0497-4e43-9006-2a17f86d78c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a80a81e56d407cb8c0bcb2d9f3116b82a20c286e8410a4728df42a67c541b21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-wrbw9" podUID="90ab0297-0497-4e43-9006-2a17f86d78c6" Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 
01:43:10.177 [INFO][3707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.178 [INFO][3707] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" iface="eth0" netns="/var/run/netns/cni-67d2d2fe-4088-71fc-deff-3aebd96b039f" Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.178 [INFO][3707] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" iface="eth0" netns="/var/run/netns/cni-67d2d2fe-4088-71fc-deff-3aebd96b039f" Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.178 [INFO][3707] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" iface="eth0" netns="/var/run/netns/cni-67d2d2fe-4088-71fc-deff-3aebd96b039f" Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.178 [INFO][3707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.178 [INFO][3707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.256 [INFO][3775] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" HandleID="k8s-pod-network.ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" Workload="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.256 [INFO][3775] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.312 [INFO][3775] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.321 [WARNING][3775] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" HandleID="k8s-pod-network.ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" Workload="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.323 [INFO][3775] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" HandleID="k8s-pod-network.ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" Workload="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.329 [INFO][3775] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:10.351670 containerd[1476]: 2026-03-06 01:43:10.338 [INFO][3707] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:09.990 [ERROR][3613] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.065 [INFO][3613] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0 calico-kube-controllers-5cfb8f57f7- calico-system ae33386a-6914-457b-a4b5-24f516f2dd38 914 0 2026-03-06 01:42:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cfb8f57f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5cfb8f57f7-pz9ts eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6886e4bfcc6 [] [] }} ContainerID="717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" Namespace="calico-system" Pod="calico-kube-controllers-5cfb8f57f7-pz9ts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.066 [INFO][3613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" Namespace="calico-system" Pod="calico-kube-controllers-5cfb8f57f7-pz9ts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.175 [INFO][3743] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" 
HandleID="k8s-pod-network.717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" Workload="localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.190 [INFO][3743] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" HandleID="k8s-pod-network.717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" Workload="localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037c140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5cfb8f57f7-pz9ts", "timestamp":"2026-03-06 01:43:10.175928828 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003fa000)} Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.191 [INFO][3743] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.191 [INFO][3743] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.191 [INFO][3743] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.198 [INFO][3743] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" host="localhost" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.210 [INFO][3743] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.241 [INFO][3743] ipam/ipam.go 558: Ran out of existing affine blocks for host host="localhost" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.244 [INFO][3743] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="localhost" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.247 [INFO][3743] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.88.128/26 Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.247 [INFO][3743] ipam/ipam.go 588: Found unclaimed block in 3.067547ms host="localhost" subnet=192.168.88.128/26 Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.247 [INFO][3743] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.252 [INFO][3743] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="localhost" subnet=192.168.88.128/26 Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.252 [INFO][3743] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.254 [INFO][3743] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" Mar 6 
01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.257 [INFO][3743] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.270 [INFO][3743] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.270 [INFO][3743] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="localhost" subnet=192.168.88.128/26 Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.277 [INFO][3743] ipam/ipam_block_reader_writer.go 267: Successfully created block Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.277 [INFO][3743] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="localhost" subnet=192.168.88.128/26 Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.281 [INFO][3743] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="localhost" subnet=192.168.88.128/26 Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.281 [INFO][3743] ipam/ipam.go 623: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. 
host="localhost" subnet=192.168.88.128/26 Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.281 [INFO][3743] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" host="localhost" Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.283 [INFO][3743] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884 Mar 6 01:43:10.354668 containerd[1476]: 2026-03-06 01:43:10.286 [INFO][3743] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" host="localhost" Mar 6 01:43:10.357369 containerd[1476]: 2026-03-06 01:43:10.291 [INFO][3743] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 handle="k8s-pod-network.717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" host="localhost" Mar 6 01:43:10.357369 containerd[1476]: 2026-03-06 01:43:10.291 [INFO][3743] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" host="localhost" Mar 6 01:43:10.357369 containerd[1476]: 2026-03-06 01:43:10.293 [INFO][3743] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:43:10.357369 containerd[1476]: 2026-03-06 01:43:10.293 [INFO][3743] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" HandleID="k8s-pod-network.717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" Workload="localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0" Mar 6 01:43:10.357369 containerd[1476]: 2026-03-06 01:43:10.298 [INFO][3613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" Namespace="calico-system" Pod="calico-kube-controllers-5cfb8f57f7-pz9ts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0", GenerateName:"calico-kube-controllers-5cfb8f57f7-", Namespace:"calico-system", SelfLink:"", UID:"ae33386a-6914-457b-a4b5-24f516f2dd38", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cfb8f57f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5cfb8f57f7-pz9ts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.128/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6886e4bfcc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:10.357369 containerd[1476]: 2026-03-06 01:43:10.298 [INFO][3613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.128/32] ContainerID="717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" Namespace="calico-system" Pod="calico-kube-controllers-5cfb8f57f7-pz9ts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0" Mar 6 01:43:10.357369 containerd[1476]: 2026-03-06 01:43:10.298 [INFO][3613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6886e4bfcc6 ContainerID="717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" Namespace="calico-system" Pod="calico-kube-controllers-5cfb8f57f7-pz9ts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0" Mar 6 01:43:10.357369 containerd[1476]: 2026-03-06 01:43:10.321 [INFO][3613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" Namespace="calico-system" Pod="calico-kube-controllers-5cfb8f57f7-pz9ts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0" Mar 6 01:43:10.357369 containerd[1476]: 2026-03-06 01:43:10.322 [INFO][3613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" Namespace="calico-system" Pod="calico-kube-controllers-5cfb8f57f7-pz9ts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0", GenerateName:"calico-kube-controllers-5cfb8f57f7-", Namespace:"calico-system", SelfLink:"", UID:"ae33386a-6914-457b-a4b5-24f516f2dd38", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cfb8f57f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884", Pod:"calico-kube-controllers-5cfb8f57f7-pz9ts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6886e4bfcc6", MAC:"b6:e6:59:62:ef:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:10.357369 containerd[1476]: 2026-03-06 01:43:10.346 [INFO][3613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884" Namespace="calico-system" Pod="calico-kube-controllers-5cfb8f57f7-pz9ts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cfb8f57f7--pz9ts-eth0" Mar 6 01:43:10.359054 containerd[1476]: time="2026-03-06T01:43:10.358091219Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-769b589d67-kcwqc,Uid:57e3989c-45dd-43eb-a32d-f906d625c6d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.359176 kubelet[2575]: E0306 01:43:10.358419 2575 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.359176 kubelet[2575]: E0306 01:43:10.358465 2575 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-769b589d67-kcwqc" Mar 6 01:43:10.359176 kubelet[2575]: E0306 01:43:10.358486 2575 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-769b589d67-kcwqc" Mar 6 01:43:10.359323 kubelet[2575]: E0306 01:43:10.358607 2575 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769b589d67-kcwqc_calico-system(57e3989c-45dd-43eb-a32d-f906d625c6d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769b589d67-kcwqc_calico-system(57e3989c-45dd-43eb-a32d-f906d625c6d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab059d60528a8be8e6ec8f05f7216b35304187d0b5f0200e64b087edee027c98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-769b589d67-kcwqc" podUID="57e3989c-45dd-43eb-a32d-f906d625c6d0" Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.184 [INFO][3708] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.184 [INFO][3708] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" iface="eth0" netns="/var/run/netns/cni-eab2ef7d-9ab7-f3d9-d728-dd44b687139d" Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.185 [INFO][3708] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" iface="eth0" netns="/var/run/netns/cni-eab2ef7d-9ab7-f3d9-d728-dd44b687139d" Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.185 [INFO][3708] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" iface="eth0" netns="/var/run/netns/cni-eab2ef7d-9ab7-f3d9-d728-dd44b687139d" Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.185 [INFO][3708] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.186 [INFO][3708] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.269 [INFO][3785] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" HandleID="k8s-pod-network.476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" Workload="localhost-k8s-whisker--745bcc5478--hcnks-eth0" Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.269 [INFO][3785] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.332 [INFO][3785] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.347 [WARNING][3785] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" HandleID="k8s-pod-network.476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" Workload="localhost-k8s-whisker--745bcc5478--hcnks-eth0" Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.347 [INFO][3785] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" HandleID="k8s-pod-network.476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" Workload="localhost-k8s-whisker--745bcc5478--hcnks-eth0" Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.349 [INFO][3785] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:10.363262 containerd[1476]: 2026-03-06 01:43:10.357 [INFO][3708] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7" Mar 6 01:43:10.367297 containerd[1476]: time="2026-03-06T01:43:10.367152034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745bcc5478-hcnks,Uid:ed20f622-f835-462f-a791-b755c458eb23,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.368380 kubelet[2575]: E0306 01:43:10.368324 2575 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.368472 kubelet[2575]: E0306 01:43:10.368441 2575 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"476c975f4ffef4d3380949ad12849163b7b8dcb2d192368f44a79b48a4879ef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-745bcc5478-hcnks" Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.174 [INFO][3715] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.176 [INFO][3715] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" iface="eth0" netns="/var/run/netns/cni-f9ba7826-7074-fb9e-8565-cdb86fbd9446" Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.180 [INFO][3715] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" iface="eth0" netns="/var/run/netns/cni-f9ba7826-7074-fb9e-8565-cdb86fbd9446" Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.184 [INFO][3715] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" iface="eth0" netns="/var/run/netns/cni-f9ba7826-7074-fb9e-8565-cdb86fbd9446" Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.185 [INFO][3715] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.185 [INFO][3715] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.275 [INFO][3778] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" HandleID="k8s-pod-network.4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" Workload="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.275 [INFO][3778] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.349 [INFO][3778] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.358 [WARNING][3778] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" HandleID="k8s-pod-network.4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" Workload="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.358 [INFO][3778] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" HandleID="k8s-pod-network.4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" Workload="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.363 [INFO][3778] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:10.374769 containerd[1476]: 2026-03-06 01:43:10.370 [INFO][3715] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc" Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.225 [INFO][3734] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.225 [INFO][3734] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" iface="eth0" netns="/var/run/netns/cni-a9fc939f-1074-9941-385a-30f1c10b24d4" Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.226 [INFO][3734] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" iface="eth0" netns="/var/run/netns/cni-a9fc939f-1074-9941-385a-30f1c10b24d4" Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.229 [INFO][3734] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" iface="eth0" netns="/var/run/netns/cni-a9fc939f-1074-9941-385a-30f1c10b24d4" Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.229 [INFO][3734] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.229 [INFO][3734] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.277 [INFO][3800] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" HandleID="k8s-pod-network.d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" Workload="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.277 [INFO][3800] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.363 [INFO][3800] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.370 [WARNING][3800] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" HandleID="k8s-pod-network.d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" Workload="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.370 [INFO][3800] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" HandleID="k8s-pod-network.d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" Workload="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.372 [INFO][3800] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:10.378151 containerd[1476]: 2026-03-06 01:43:10.375 [INFO][3734] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd" Mar 6 01:43:10.378589 containerd[1476]: time="2026-03-06T01:43:10.378340684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769b589d67-dgxvb,Uid:fed33a75-c212-47b5-8552-87790ea76352,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.378674 kubelet[2575]: E0306 01:43:10.378624 2575 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.378674 kubelet[2575]: E0306 01:43:10.378665 
2575 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-769b589d67-dgxvb" Mar 6 01:43:10.378733 kubelet[2575]: E0306 01:43:10.378682 2575 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-769b589d67-dgxvb" Mar 6 01:43:10.378753 kubelet[2575]: E0306 01:43:10.378732 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769b589d67-dgxvb_calico-system(fed33a75-c212-47b5-8552-87790ea76352)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769b589d67-dgxvb_calico-system(fed33a75-c212-47b5-8552-87790ea76352)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4266c445018624afbe45e779532cc7cff23ac20673612743cf909edf17ac9bdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-769b589d67-dgxvb" podUID="fed33a75-c212-47b5-8552-87790ea76352" Mar 6 01:43:10.383217 containerd[1476]: time="2026-03-06T01:43:10.383173930Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-pnmw7,Uid:90fa09d7-47f2-40ee-97f2-b3546e0b3fa4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.383689 kubelet[2575]: E0306 01:43:10.383639 2575 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:43:10.383689 kubelet[2575]: E0306 01:43:10.383676 2575 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-pnmw7" Mar 6 01:43:10.383760 kubelet[2575]: E0306 01:43:10.383691 2575 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-pnmw7" Mar 6 01:43:10.383760 kubelet[2575]: E0306 01:43:10.383737 2575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7d764666f9-pnmw7_kube-system(90fa09d7-47f2-40ee-97f2-b3546e0b3fa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-pnmw7_kube-system(90fa09d7-47f2-40ee-97f2-b3546e0b3fa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0bfad0886761d31e273a25296f44b1dd2d1feae73fd12082af2b06b27b81ccd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-pnmw7" podUID="90fa09d7-47f2-40ee-97f2-b3546e0b3fa4" Mar 6 01:43:10.387335 containerd[1476]: time="2026-03-06T01:43:10.386733348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:10.387335 containerd[1476]: time="2026-03-06T01:43:10.386806447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:10.387335 containerd[1476]: time="2026-03-06T01:43:10.386831815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:10.387335 containerd[1476]: time="2026-03-06T01:43:10.386952984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:10.413793 systemd[1]: Started cri-containerd-717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884.scope - libcontainer container 717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884. 
Mar 6 01:43:10.434619 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:10.481507 containerd[1476]: time="2026-03-06T01:43:10.481404099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cfb8f57f7-pz9ts,Uid:ae33386a-6914-457b-a4b5-24f516f2dd38,Namespace:calico-system,Attempt:0,} returns sandbox id \"717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884\"" Mar 6 01:43:10.484578 containerd[1476]: time="2026-03-06T01:43:10.483411349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 6 01:43:10.565051 containerd[1476]: time="2026-03-06T01:43:10.564929326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769b589d67-dgxvb,Uid:fed33a75-c212-47b5-8552-87790ea76352,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:10.567837 kubelet[2575]: E0306 01:43:10.567817 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:10.569689 containerd[1476]: time="2026-03-06T01:43:10.568473589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pnmw7,Uid:90fa09d7-47f2-40ee-97f2-b3546e0b3fa4,Namespace:kube-system,Attempt:0,}" Mar 6 01:43:10.571010 containerd[1476]: time="2026-03-06T01:43:10.570963783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769b589d67-kcwqc,Uid:57e3989c-45dd-43eb-a32d-f906d625c6d0,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:10.576035 kubelet[2575]: E0306 01:43:10.576015 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:10.578484 containerd[1476]: time="2026-03-06T01:43:10.577808160Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-wrbw9,Uid:90ab0297-0497-4e43-9006-2a17f86d78c6,Namespace:kube-system,Attempt:0,}" Mar 6 01:43:10.590884 containerd[1476]: time="2026-03-06T01:43:10.590819194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5c9lf,Uid:763ef55a-80e2-49cd-8289-67ad48649a32,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:10.600043 kubelet[2575]: I0306 01:43:10.599554 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-n8fv4" podStartSLOduration=3.674039261 podStartE2EDuration="17.599503373s" podCreationTimestamp="2026-03-06 01:42:53 +0000 UTC" firstStartedPulling="2026-03-06 01:42:55.628854647 +0000 UTC m=+21.165458682" lastFinishedPulling="2026-03-06 01:43:09.554318759 +0000 UTC m=+35.090922794" observedRunningTime="2026-03-06 01:43:10.584123566 +0000 UTC m=+36.120727611" watchObservedRunningTime="2026-03-06 01:43:10.599503373 +0000 UTC m=+36.136107408" Mar 6 01:43:10.749131 kubelet[2575]: I0306 01:43:10.746579 2575 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/ed20f622-f835-462f-a791-b755c458eb23-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed20f622-f835-462f-a791-b755c458eb23-whisker-ca-bundle\") pod \"ed20f622-f835-462f-a791-b755c458eb23\" (UID: \"ed20f622-f835-462f-a791-b755c458eb23\") " Mar 6 01:43:10.749131 kubelet[2575]: I0306 01:43:10.746695 2575 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/ed20f622-f835-462f-a791-b755c458eb23-nginx-config\" (UniqueName: \"kubernetes.io/configmap/ed20f622-f835-462f-a791-b755c458eb23-nginx-config\") pod \"ed20f622-f835-462f-a791-b755c458eb23\" (UID: \"ed20f622-f835-462f-a791-b755c458eb23\") " Mar 6 01:43:10.749131 kubelet[2575]: I0306 01:43:10.746718 2575 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/secret/ed20f622-f835-462f-a791-b755c458eb23-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed20f622-f835-462f-a791-b755c458eb23-whisker-backend-key-pair\") pod \"ed20f622-f835-462f-a791-b755c458eb23\" (UID: \"ed20f622-f835-462f-a791-b755c458eb23\") " Mar 6 01:43:10.749131 kubelet[2575]: I0306 01:43:10.746739 2575 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/ed20f622-f835-462f-a791-b755c458eb23-kube-api-access-x85kh\" (UniqueName: \"kubernetes.io/projected/ed20f622-f835-462f-a791-b755c458eb23-kube-api-access-x85kh\") pod \"ed20f622-f835-462f-a791-b755c458eb23\" (UID: \"ed20f622-f835-462f-a791-b755c458eb23\") " Mar 6 01:43:10.749131 kubelet[2575]: I0306 01:43:10.747441 2575 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed20f622-f835-462f-a791-b755c458eb23-whisker-ca-bundle" pod "ed20f622-f835-462f-a791-b755c458eb23" (UID: "ed20f622-f835-462f-a791-b755c458eb23"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:43:10.749726 kubelet[2575]: I0306 01:43:10.749705 2575 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed20f622-f835-462f-a791-b755c458eb23-nginx-config" pod "ed20f622-f835-462f-a791-b755c458eb23" (UID: "ed20f622-f835-462f-a791-b755c458eb23"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:43:10.754052 kubelet[2575]: I0306 01:43:10.754029 2575 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed20f622-f835-462f-a791-b755c458eb23-whisker-backend-key-pair" pod "ed20f622-f835-462f-a791-b755c458eb23" (UID: "ed20f622-f835-462f-a791-b755c458eb23"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 01:43:10.755061 kubelet[2575]: I0306 01:43:10.754648 2575 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed20f622-f835-462f-a791-b755c458eb23-kube-api-access-x85kh" pod "ed20f622-f835-462f-a791-b755c458eb23" (UID: "ed20f622-f835-462f-a791-b755c458eb23"). InnerVolumeSpecName "kube-api-access-x85kh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 01:43:10.817700 systemd-networkd[1413]: calicc303200f50: Link UP Mar 6 01:43:10.817997 systemd-networkd[1413]: calicc303200f50: Gained carrier Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.688 [ERROR][3928] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.707 [INFO][3928] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--pnmw7-eth0 coredns-7d764666f9- kube-system 90fa09d7-47f2-40ee-97f2-b3546e0b3fa4 942 0 2026-03-06 01:42:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-pnmw7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicc303200f50 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" Namespace="kube-system" Pod="coredns-7d764666f9-pnmw7" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pnmw7-" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.707 [INFO][3928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" Namespace="kube-system" Pod="coredns-7d764666f9-pnmw7" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.755 [INFO][3968] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" HandleID="k8s-pod-network.b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" Workload="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.769 [INFO][3968] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" HandleID="k8s-pod-network.b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" Workload="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000512f20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-pnmw7", "timestamp":"2026-03-06 01:43:10.755598943 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001b86e0)} Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.769 [INFO][3968] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.769 [INFO][3968] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.769 [INFO][3968] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.772 [INFO][3968] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" host="localhost" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.781 [INFO][3968] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.789 [INFO][3968] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.791 [INFO][3968] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.797 [INFO][3968] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.797 [INFO][3968] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" host="localhost" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.799 [INFO][3968] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.806 [INFO][3968] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" host="localhost" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.812 [INFO][3968] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" host="localhost" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.812 [INFO][3968] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" host="localhost" Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.812 [INFO][3968] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:10.834388 containerd[1476]: 2026-03-06 01:43:10.812 [INFO][3968] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" HandleID="k8s-pod-network.b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" Workload="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" Mar 6 01:43:10.835013 containerd[1476]: 2026-03-06 01:43:10.815 [INFO][3928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" Namespace="kube-system" Pod="coredns-7d764666f9-pnmw7" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pnmw7-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"90fa09d7-47f2-40ee-97f2-b3546e0b3fa4", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-pnmw7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc303200f50", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:10.835013 containerd[1476]: 2026-03-06 01:43:10.815 [INFO][3928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" Namespace="kube-system" Pod="coredns-7d764666f9-pnmw7" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" Mar 6 01:43:10.835013 containerd[1476]: 2026-03-06 01:43:10.815 [INFO][3928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc303200f50 ContainerID="b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" Namespace="kube-system" Pod="coredns-7d764666f9-pnmw7" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" Mar 6 
01:43:10.835013 containerd[1476]: 2026-03-06 01:43:10.821 [INFO][3928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" Namespace="kube-system" Pod="coredns-7d764666f9-pnmw7" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" Mar 6 01:43:10.835013 containerd[1476]: 2026-03-06 01:43:10.821 [INFO][3928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" Namespace="kube-system" Pod="coredns-7d764666f9-pnmw7" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pnmw7-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"90fa09d7-47f2-40ee-97f2-b3546e0b3fa4", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e", Pod:"coredns-7d764666f9-pnmw7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc303200f50", MAC:"72:86:bc:2a:64:93", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:10.835013 containerd[1476]: 2026-03-06 01:43:10.831 [INFO][3928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e" Namespace="kube-system" Pod="coredns-7d764666f9-pnmw7" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pnmw7-eth0" Mar 6 01:43:10.847097 kubelet[2575]: I0306 01:43:10.847052 2575 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed20f622-f835-462f-a791-b755c458eb23-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 6 01:43:10.847097 kubelet[2575]: I0306 01:43:10.847094 2575 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ed20f622-f835-462f-a791-b755c458eb23-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 6 01:43:10.847097 kubelet[2575]: I0306 01:43:10.847104 2575 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed20f622-f835-462f-a791-b755c458eb23-whisker-backend-key-pair\") on node \"localhost\" DevicePath 
\"\"" Mar 6 01:43:10.847097 kubelet[2575]: I0306 01:43:10.847112 2575 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x85kh\" (UniqueName: \"kubernetes.io/projected/ed20f622-f835-462f-a791-b755c458eb23-kube-api-access-x85kh\") on node \"localhost\" DevicePath \"\"" Mar 6 01:43:10.860682 containerd[1476]: time="2026-03-06T01:43:10.860356502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:10.860682 containerd[1476]: time="2026-03-06T01:43:10.860436082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:10.860682 containerd[1476]: time="2026-03-06T01:43:10.860455619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:10.861101 containerd[1476]: time="2026-03-06T01:43:10.860611253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:10.884758 systemd[1]: Started cri-containerd-b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e.scope - libcontainer container b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e. 
Mar 6 01:43:10.903987 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:10.919711 systemd-networkd[1413]: cali5eeb73fccf4: Link UP Mar 6 01:43:10.919917 systemd-networkd[1413]: cali5eeb73fccf4: Gained carrier Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.685 [ERROR][3908] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.701 [INFO][3908] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--wrbw9-eth0 coredns-7d764666f9- kube-system 90ab0297-0497-4e43-9006-2a17f86d78c6 937 0 2026-03-06 01:42:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-wrbw9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5eeb73fccf4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" Namespace="kube-system" Pod="coredns-7d764666f9-wrbw9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wrbw9-" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.701 [INFO][3908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" Namespace="kube-system" Pod="coredns-7d764666f9-wrbw9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.771 [INFO][3965] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" HandleID="k8s-pod-network.eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" Workload="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.784 [INFO][3965] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" HandleID="k8s-pod-network.eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" Workload="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef7c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-wrbw9", "timestamp":"2026-03-06 01:43:10.771069786 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000608f20)} Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.785 [INFO][3965] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.812 [INFO][3965] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.812 [INFO][3965] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.874 [INFO][3965] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" host="localhost" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.882 [INFO][3965] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.890 [INFO][3965] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.893 [INFO][3965] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.896 [INFO][3965] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.897 [INFO][3965] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" host="localhost" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.899 [INFO][3965] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55 Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.905 [INFO][3965] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" host="localhost" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.912 [INFO][3965] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" host="localhost" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.912 [INFO][3965] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" host="localhost" Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.912 [INFO][3965] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:10.946486 containerd[1476]: 2026-03-06 01:43:10.912 [INFO][3965] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" HandleID="k8s-pod-network.eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" Workload="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" Mar 6 01:43:10.947134 containerd[1476]: 2026-03-06 01:43:10.915 [INFO][3908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" Namespace="kube-system" Pod="coredns-7d764666f9-wrbw9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wrbw9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"90ab0297-0497-4e43-9006-2a17f86d78c6", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-wrbw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5eeb73fccf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:10.947134 containerd[1476]: 2026-03-06 01:43:10.916 [INFO][3908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" Namespace="kube-system" Pod="coredns-7d764666f9-wrbw9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" Mar 6 01:43:10.947134 containerd[1476]: 2026-03-06 01:43:10.916 [INFO][3908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5eeb73fccf4 ContainerID="eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" Namespace="kube-system" Pod="coredns-7d764666f9-wrbw9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" Mar 6 
01:43:10.947134 containerd[1476]: 2026-03-06 01:43:10.921 [INFO][3908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" Namespace="kube-system" Pod="coredns-7d764666f9-wrbw9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" Mar 6 01:43:10.947134 containerd[1476]: 2026-03-06 01:43:10.921 [INFO][3908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" Namespace="kube-system" Pod="coredns-7d764666f9-wrbw9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wrbw9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"90ab0297-0497-4e43-9006-2a17f86d78c6", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55", Pod:"coredns-7d764666f9-wrbw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5eeb73fccf4", MAC:"d2:45:fb:6a:b9:70", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:10.947134 containerd[1476]: 2026-03-06 01:43:10.937 [INFO][3908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55" Namespace="kube-system" Pod="coredns-7d764666f9-wrbw9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wrbw9-eth0" Mar 6 01:43:10.980931 containerd[1476]: time="2026-03-06T01:43:10.980736198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pnmw7,Uid:90fa09d7-47f2-40ee-97f2-b3546e0b3fa4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e\"" Mar 6 01:43:10.982666 kubelet[2575]: E0306 01:43:10.982603 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:10.988825 containerd[1476]: time="2026-03-06T01:43:10.988306317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:10.988825 containerd[1476]: time="2026-03-06T01:43:10.988395477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:10.988825 containerd[1476]: time="2026-03-06T01:43:10.988443106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:10.990502 containerd[1476]: time="2026-03-06T01:43:10.989960539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:10.990502 containerd[1476]: time="2026-03-06T01:43:10.990413417Z" level=info msg="CreateContainer within sandbox \"b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:43:11.017157 containerd[1476]: time="2026-03-06T01:43:11.017046795Z" level=info msg="CreateContainer within sandbox \"b3805e073ec3f3304fff0c8c72921612bae6a79d865e5426441bd81c0a1a597e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"365a764751e223501a02baf1f137d363a25ae1b8cdac781d578829b9cfd86d1b\"" Mar 6 01:43:11.021943 containerd[1476]: time="2026-03-06T01:43:11.021847686Z" level=info msg="StartContainer for \"365a764751e223501a02baf1f137d363a25ae1b8cdac781d578829b9cfd86d1b\"" Mar 6 01:43:11.024744 systemd[1]: Started cri-containerd-eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55.scope - libcontainer container eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55. 
Mar 6 01:43:11.042554 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:11.074668 systemd[1]: Started cri-containerd-365a764751e223501a02baf1f137d363a25ae1b8cdac781d578829b9cfd86d1b.scope - libcontainer container 365a764751e223501a02baf1f137d363a25ae1b8cdac781d578829b9cfd86d1b. Mar 6 01:43:11.095702 containerd[1476]: time="2026-03-06T01:43:11.092510246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wrbw9,Uid:90ab0297-0497-4e43-9006-2a17f86d78c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55\"" Mar 6 01:43:11.098781 systemd-networkd[1413]: calic70e484312f: Link UP Mar 6 01:43:11.099637 systemd-networkd[1413]: calic70e484312f: Gained carrier Mar 6 01:43:11.106587 kubelet[2575]: E0306 01:43:11.105875 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:11.115682 containerd[1476]: time="2026-03-06T01:43:11.115648537Z" level=info msg="CreateContainer within sandbox \"eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:10.710 [ERROR][3898] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:10.728 [INFO][3898] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0 calico-apiserver-769b589d67- calico-system fed33a75-c212-47b5-8552-87790ea76352 939 0 2026-03-06 01:42:51 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:769b589d67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-769b589d67-dgxvb eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic70e484312f [] [] }} ContainerID="c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" Namespace="calico-system" Pod="calico-apiserver-769b589d67-dgxvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--dgxvb-" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:10.729 [INFO][3898] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" Namespace="calico-system" Pod="calico-apiserver-769b589d67-dgxvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:10.781 [INFO][3989] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" HandleID="k8s-pod-network.c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" Workload="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:10.794 [INFO][3989] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" HandleID="k8s-pod-network.c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" Workload="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002774a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-769b589d67-dgxvb", "timestamp":"2026-03-06 01:43:10.781605289 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f4f20)} Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:10.794 [INFO][3989] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:10.914 [INFO][3989] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:10.914 [INFO][3989] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:10.978 [INFO][3989] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" host="localhost" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:11.000 [INFO][3989] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:11.008 [INFO][3989] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:11.013 [INFO][3989] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:11.026 [INFO][3989] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:11.026 [INFO][3989] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" host="localhost" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:11.056 [INFO][3989] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:11.062 [INFO][3989] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" host="localhost" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:11.081 [INFO][3989] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" host="localhost" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:11.081 [INFO][3989] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" host="localhost" Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:11.081 [INFO][3989] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:43:11.129170 containerd[1476]: 2026-03-06 01:43:11.082 [INFO][3989] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" HandleID="k8s-pod-network.c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" Workload="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" Mar 6 01:43:11.129896 containerd[1476]: 2026-03-06 01:43:11.090 [INFO][3898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" Namespace="calico-system" Pod="calico-apiserver-769b589d67-dgxvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0", GenerateName:"calico-apiserver-769b589d67-", Namespace:"calico-system", SelfLink:"", UID:"fed33a75-c212-47b5-8552-87790ea76352", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769b589d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-769b589d67-dgxvb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic70e484312f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:11.129896 containerd[1476]: 2026-03-06 01:43:11.091 [INFO][3898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" Namespace="calico-system" Pod="calico-apiserver-769b589d67-dgxvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" Mar 6 01:43:11.129896 containerd[1476]: 2026-03-06 01:43:11.091 [INFO][3898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic70e484312f ContainerID="c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" Namespace="calico-system" Pod="calico-apiserver-769b589d67-dgxvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" Mar 6 01:43:11.129896 containerd[1476]: 2026-03-06 01:43:11.099 [INFO][3898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" Namespace="calico-system" Pod="calico-apiserver-769b589d67-dgxvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" Mar 6 01:43:11.129896 containerd[1476]: 2026-03-06 01:43:11.099 [INFO][3898] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" Namespace="calico-system" Pod="calico-apiserver-769b589d67-dgxvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0", GenerateName:"calico-apiserver-769b589d67-", Namespace:"calico-system", 
SelfLink:"", UID:"fed33a75-c212-47b5-8552-87790ea76352", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769b589d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c", Pod:"calico-apiserver-769b589d67-dgxvb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic70e484312f", MAC:"46:8a:b7:f0:90:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:11.129896 containerd[1476]: 2026-03-06 01:43:11.121 [INFO][3898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c" Namespace="calico-system" Pod="calico-apiserver-769b589d67-dgxvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--dgxvb-eth0" Mar 6 01:43:11.169087 systemd[1]: Created slice kubepods-besteffort-pod2ad6f57d_dd1e_42e4_a3fb_1ed523c0dea6.slice - libcontainer container kubepods-besteffort-pod2ad6f57d_dd1e_42e4_a3fb_1ed523c0dea6.slice. 
Mar 6 01:43:11.181043 containerd[1476]: time="2026-03-06T01:43:11.180851017Z" level=info msg="StartContainer for \"365a764751e223501a02baf1f137d363a25ae1b8cdac781d578829b9cfd86d1b\" returns successfully" Mar 6 01:43:11.189610 containerd[1476]: time="2026-03-06T01:43:11.189425950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cv4hv,Uid:2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:11.194309 systemd-networkd[1413]: cali61e94dda459: Link UP Mar 6 01:43:11.195159 systemd[1]: Removed slice kubepods-besteffort-poded20f622_f835_462f_a791_b755c458eb23.slice - libcontainer container kubepods-besteffort-poded20f622_f835_462f_a791_b755c458eb23.slice. Mar 6 01:43:11.196978 systemd-networkd[1413]: cali61e94dda459: Gained carrier Mar 6 01:43:11.233334 containerd[1476]: time="2026-03-06T01:43:11.232603203Z" level=info msg="CreateContainer within sandbox \"eea10b76700461193a70d60ddf7b9f6f0530a9db6b6161243189ef93a4441d55\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc8e4059dbe2d2d344299f84747660539a4f691d43a8f5cae217f9a902fdcbd1\"" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:10.704 [ERROR][3906] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:10.725 [INFO][3906] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0 calico-apiserver-769b589d67- calico-system 57e3989c-45dd-43eb-a32d-f906d625c6d0 940 0 2026-03-06 01:42:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:769b589d67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] 
[]} {k8s localhost calico-apiserver-769b589d67-kcwqc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali61e94dda459 [] [] }} ContainerID="fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" Namespace="calico-system" Pod="calico-apiserver-769b589d67-kcwqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--kcwqc-" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:10.725 [INFO][3906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" Namespace="calico-system" Pod="calico-apiserver-769b589d67-kcwqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:10.791 [INFO][3980] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" HandleID="k8s-pod-network.fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" Workload="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:10.801 [INFO][3980] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" HandleID="k8s-pod-network.fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" Workload="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004f04a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-769b589d67-kcwqc", "timestamp":"2026-03-06 01:43:10.791646179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002b1080)} Mar 6 
01:43:11.234708 containerd[1476]: 2026-03-06 01:43:10.801 [INFO][3980] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.083 [INFO][3980] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.084 [INFO][3980] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.095 [INFO][3980] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" host="localhost" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.107 [INFO][3980] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.115 [INFO][3980] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.121 [INFO][3980] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.128 [INFO][3980] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.128 [INFO][3980] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" host="localhost" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.131 [INFO][3980] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2 Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.138 [INFO][3980] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" host="localhost" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.159 [INFO][3980] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" host="localhost" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.177 [INFO][3980] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" host="localhost" Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.178 [INFO][3980] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:11.234708 containerd[1476]: 2026-03-06 01:43:11.178 [INFO][3980] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" HandleID="k8s-pod-network.fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" Workload="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" Mar 6 01:43:11.235800 containerd[1476]: 2026-03-06 01:43:11.184 [INFO][3906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" Namespace="calico-system" Pod="calico-apiserver-769b589d67-kcwqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0", GenerateName:"calico-apiserver-769b589d67-", Namespace:"calico-system", SelfLink:"", UID:"57e3989c-45dd-43eb-a32d-f906d625c6d0", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 51, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769b589d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-769b589d67-kcwqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61e94dda459", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:11.235800 containerd[1476]: 2026-03-06 01:43:11.184 [INFO][3906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" Namespace="calico-system" Pod="calico-apiserver-769b589d67-kcwqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" Mar 6 01:43:11.235800 containerd[1476]: 2026-03-06 01:43:11.184 [INFO][3906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61e94dda459 ContainerID="fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" Namespace="calico-system" Pod="calico-apiserver-769b589d67-kcwqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" Mar 6 01:43:11.235800 containerd[1476]: 2026-03-06 01:43:11.199 [INFO][3906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" 
Namespace="calico-system" Pod="calico-apiserver-769b589d67-kcwqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" Mar 6 01:43:11.235800 containerd[1476]: 2026-03-06 01:43:11.200 [INFO][3906] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" Namespace="calico-system" Pod="calico-apiserver-769b589d67-kcwqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0", GenerateName:"calico-apiserver-769b589d67-", Namespace:"calico-system", SelfLink:"", UID:"57e3989c-45dd-43eb-a32d-f906d625c6d0", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769b589d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2", Pod:"calico-apiserver-769b589d67-kcwqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali61e94dda459", MAC:"c6:22:d8:61:63:f3", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:11.235800 containerd[1476]: 2026-03-06 01:43:11.218 [INFO][3906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2" Namespace="calico-system" Pod="calico-apiserver-769b589d67-kcwqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--769b589d67--kcwqc-eth0" Mar 6 01:43:11.235800 containerd[1476]: time="2026-03-06T01:43:11.235213555Z" level=info msg="StartContainer for \"fc8e4059dbe2d2d344299f84747660539a4f691d43a8f5cae217f9a902fdcbd1\"" Mar 6 01:43:11.283383 containerd[1476]: time="2026-03-06T01:43:11.276990207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:11.283383 containerd[1476]: time="2026-03-06T01:43:11.280446780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:11.283383 containerd[1476]: time="2026-03-06T01:43:11.280464214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:11.283383 containerd[1476]: time="2026-03-06T01:43:11.280640888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:11.318165 systemd-networkd[1413]: cali6d3e954de4c: Link UP Mar 6 01:43:11.329454 systemd-networkd[1413]: cali6d3e954de4c: Gained carrier Mar 6 01:43:11.365000 systemd[1]: Started cri-containerd-fc8e4059dbe2d2d344299f84747660539a4f691d43a8f5cae217f9a902fdcbd1.scope - libcontainer container fc8e4059dbe2d2d344299f84747660539a4f691d43a8f5cae217f9a902fdcbd1. 
Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:10.713 [ERROR][3944] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:10.737 [INFO][3944] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0 goldmane-9f7667bb8- calico-system 763ef55a-80e2-49cd-8289-67ad48649a32 938 0 2026-03-06 01:42:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-5c9lf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6d3e954de4c [] [] }} ContainerID="94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" Namespace="calico-system" Pod="goldmane-9f7667bb8-5c9lf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5c9lf-" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:10.737 [INFO][3944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" Namespace="calico-system" Pod="goldmane-9f7667bb8-5c9lf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:10.796 [INFO][3996] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" HandleID="k8s-pod-network.94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" Workload="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:10.805 [INFO][3996] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" HandleID="k8s-pod-network.94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" Workload="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059cbf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-5c9lf", "timestamp":"2026-03-06 01:43:10.796460424 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00067a2c0)} Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:10.806 [INFO][3996] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.179 [INFO][3996] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.179 [INFO][3996] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.191 [INFO][3996] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" host="localhost" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.214 [INFO][3996] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.226 [INFO][3996] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.230 [INFO][3996] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.249 [INFO][3996] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.249 [INFO][3996] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" host="localhost" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.262 [INFO][3996] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265 Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.277 [INFO][3996] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" host="localhost" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.290 [INFO][3996] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" host="localhost" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.291 [INFO][3996] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" host="localhost" Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.291 [INFO][3996] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:11.370322 containerd[1476]: 2026-03-06 01:43:11.291 [INFO][3996] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" HandleID="k8s-pod-network.94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" Workload="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" Mar 6 01:43:11.371153 containerd[1476]: 2026-03-06 01:43:11.302 [INFO][3944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" Namespace="calico-system" Pod="goldmane-9f7667bb8-5c9lf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"763ef55a-80e2-49cd-8289-67ad48649a32", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-5c9lf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d3e954de4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:11.371153 containerd[1476]: 2026-03-06 01:43:11.305 [INFO][3944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" Namespace="calico-system" Pod="goldmane-9f7667bb8-5c9lf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" Mar 6 01:43:11.371153 containerd[1476]: 2026-03-06 01:43:11.305 [INFO][3944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d3e954de4c ContainerID="94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" Namespace="calico-system" Pod="goldmane-9f7667bb8-5c9lf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" Mar 6 01:43:11.371153 containerd[1476]: 2026-03-06 01:43:11.341 [INFO][3944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" Namespace="calico-system" Pod="goldmane-9f7667bb8-5c9lf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" Mar 6 01:43:11.371153 containerd[1476]: 2026-03-06 01:43:11.345 [INFO][3944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" Namespace="calico-system" Pod="goldmane-9f7667bb8-5c9lf" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"763ef55a-80e2-49cd-8289-67ad48649a32", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265", Pod:"goldmane-9f7667bb8-5c9lf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d3e954de4c", MAC:"5a:5b:7f:c7:da:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:11.371153 containerd[1476]: 2026-03-06 01:43:11.366 [INFO][3944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265" Namespace="calico-system" Pod="goldmane-9f7667bb8-5c9lf" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5c9lf-eth0" Mar 6 01:43:11.396725 systemd[1]: Started cri-containerd-c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c.scope 
- libcontainer container c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c. Mar 6 01:43:11.400672 containerd[1476]: time="2026-03-06T01:43:11.400215912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:11.400672 containerd[1476]: time="2026-03-06T01:43:11.400307315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:11.400672 containerd[1476]: time="2026-03-06T01:43:11.400321020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:11.400672 containerd[1476]: time="2026-03-06T01:43:11.400427712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:11.411778 containerd[1476]: time="2026-03-06T01:43:11.411504660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:11.413462 containerd[1476]: time="2026-03-06T01:43:11.411983067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:11.413462 containerd[1476]: time="2026-03-06T01:43:11.411997003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:11.413462 containerd[1476]: time="2026-03-06T01:43:11.412668945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:11.428645 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:11.447777 systemd[1]: Started cri-containerd-94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265.scope - libcontainer container 94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265. Mar 6 01:43:11.466076 systemd[1]: Started cri-containerd-fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2.scope - libcontainer container fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2. Mar 6 01:43:11.472040 containerd[1476]: time="2026-03-06T01:43:11.471929134Z" level=info msg="StartContainer for \"fc8e4059dbe2d2d344299f84747660539a4f691d43a8f5cae217f9a902fdcbd1\" returns successfully" Mar 6 01:43:11.497981 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:11.523921 containerd[1476]: time="2026-03-06T01:43:11.523835673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769b589d67-dgxvb,Uid:fed33a75-c212-47b5-8552-87790ea76352,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c\"" Mar 6 01:43:11.528400 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:11.565328 containerd[1476]: time="2026-03-06T01:43:11.565227536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769b589d67-kcwqc,Uid:57e3989c-45dd-43eb-a32d-f906d625c6d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2\"" Mar 6 01:43:11.581463 kubelet[2575]: E0306 01:43:11.581241 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:11.586064 systemd[1]: var-lib-kubelet-pods-ed20f622\x2df835\x2d462f\x2da791\x2db755c458eb23-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx85kh.mount: Deactivated successfully. Mar 6 01:43:11.586690 systemd[1]: var-lib-kubelet-pods-ed20f622\x2df835\x2d462f\x2da791\x2db755c458eb23-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 6 01:43:11.601219 kubelet[2575]: I0306 01:43:11.600888 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wrbw9" podStartSLOduration=31.600870928 podStartE2EDuration="31.600870928s" podCreationTimestamp="2026-03-06 01:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:43:11.598468443 +0000 UTC m=+37.135072477" watchObservedRunningTime="2026-03-06 01:43:11.600870928 +0000 UTC m=+37.137474983" Mar 6 01:43:11.615214 kubelet[2575]: E0306 01:43:11.614502 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:11.664156 systemd-networkd[1413]: calia4a08b6859a: Link UP Mar 6 01:43:11.666241 systemd-networkd[1413]: calia4a08b6859a: Gained carrier Mar 6 01:43:11.704253 kubelet[2575]: I0306 01:43:11.703979 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-pnmw7" podStartSLOduration=31.703914047 podStartE2EDuration="31.703914047s" podCreationTimestamp="2026-03-06 01:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:43:11.703617997 +0000 UTC m=+37.240222042" watchObservedRunningTime="2026-03-06 01:43:11.703914047 +0000 UTC m=+37.240567276" Mar 6 01:43:11.722988 
systemd-networkd[1413]: cali6886e4bfcc6: Gained IPv6LL Mar 6 01:43:11.725147 systemd[1]: run-containerd-runc-k8s.io-86a1042dca1558125bec0f11aef8bd7e94e2973be95c96f6309b86f6709fa9fd-runc.o9G4h9.mount: Deactivated successfully. Mar 6 01:43:11.740039 systemd[1]: Created slice kubepods-besteffort-pod1f99c1bc_0e67_404c_8a16_83d04581bbea.slice - libcontainer container kubepods-besteffort-pod1f99c1bc_0e67_404c_8a16_83d04581bbea.slice. Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.403 [ERROR][4180] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.422 [INFO][4180] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cv4hv-eth0 csi-node-driver- calico-system 2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6 783 0 2026-03-06 01:42:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cv4hv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia4a08b6859a [] [] }} ContainerID="accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" Namespace="calico-system" Pod="csi-node-driver-cv4hv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cv4hv-" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.422 [INFO][4180] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" Namespace="calico-system" Pod="csi-node-driver-cv4hv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cv4hv-eth0" Mar 6 
01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.497 [INFO][4305] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" HandleID="k8s-pod-network.accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" Workload="localhost-k8s-csi--node--driver--cv4hv-eth0" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.531 [INFO][4305] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" HandleID="k8s-pod-network.accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" Workload="localhost-k8s-csi--node--driver--cv4hv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000348b80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cv4hv", "timestamp":"2026-03-06 01:43:11.49722313 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00057fb80)} Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.532 [INFO][4305] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.532 [INFO][4305] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.532 [INFO][4305] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.538 [INFO][4305] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" host="localhost" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.560 [INFO][4305] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.575 [INFO][4305] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.584 [INFO][4305] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.590 [INFO][4305] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.591 [INFO][4305] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" host="localhost" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.596 [INFO][4305] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339 Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.611 [INFO][4305] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" host="localhost" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.625 [INFO][4305] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" host="localhost" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.625 [INFO][4305] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" host="localhost" Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.625 [INFO][4305] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:11.787193 containerd[1476]: 2026-03-06 01:43:11.625 [INFO][4305] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" HandleID="k8s-pod-network.accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" Workload="localhost-k8s-csi--node--driver--cv4hv-eth0" Mar 6 01:43:11.788720 containerd[1476]: 2026-03-06 01:43:11.646 [INFO][4180] cni-plugin/k8s.go 418: Populated endpoint ContainerID="accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" Namespace="calico-system" Pod="csi-node-driver-cv4hv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cv4hv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cv4hv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cv4hv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia4a08b6859a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:11.788720 containerd[1476]: 2026-03-06 01:43:11.648 [INFO][4180] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" Namespace="calico-system" Pod="csi-node-driver-cv4hv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cv4hv-eth0" Mar 6 01:43:11.788720 containerd[1476]: 2026-03-06 01:43:11.648 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4a08b6859a ContainerID="accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" Namespace="calico-system" Pod="csi-node-driver-cv4hv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cv4hv-eth0" Mar 6 01:43:11.788720 containerd[1476]: 2026-03-06 01:43:11.670 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" Namespace="calico-system" Pod="csi-node-driver-cv4hv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cv4hv-eth0" Mar 6 01:43:11.788720 containerd[1476]: 2026-03-06 01:43:11.685 [INFO][4180] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" 
Namespace="calico-system" Pod="csi-node-driver-cv4hv" WorkloadEndpoint="localhost-k8s-csi--node--driver--cv4hv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cv4hv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 42, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339", Pod:"csi-node-driver-cv4hv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia4a08b6859a", MAC:"02:ac:45:e6:06:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:11.788720 containerd[1476]: 2026-03-06 01:43:11.754 [INFO][4180] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339" Namespace="calico-system" Pod="csi-node-driver-cv4hv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cv4hv-eth0" Mar 6 01:43:11.814141 containerd[1476]: time="2026-03-06T01:43:11.814052433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5c9lf,Uid:763ef55a-80e2-49cd-8289-67ad48649a32,Namespace:calico-system,Attempt:0,} returns sandbox id \"94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265\"" Mar 6 01:43:11.854226 containerd[1476]: time="2026-03-06T01:43:11.853948990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:11.854226 containerd[1476]: time="2026-03-06T01:43:11.854093073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:11.854226 containerd[1476]: time="2026-03-06T01:43:11.854110456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:11.856702 containerd[1476]: time="2026-03-06T01:43:11.854221867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:11.868111 kubelet[2575]: I0306 01:43:11.868037 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1f99c1bc-0e67-404c-8a16-83d04581bbea-nginx-config\") pod \"whisker-77c4c75f9f-bv8h8\" (UID: \"1f99c1bc-0e67-404c-8a16-83d04581bbea\") " pod="calico-system/whisker-77c4c75f9f-bv8h8" Mar 6 01:43:11.868248 kubelet[2575]: I0306 01:43:11.868116 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq6q8\" (UniqueName: \"kubernetes.io/projected/1f99c1bc-0e67-404c-8a16-83d04581bbea-kube-api-access-vq6q8\") pod \"whisker-77c4c75f9f-bv8h8\" (UID: \"1f99c1bc-0e67-404c-8a16-83d04581bbea\") " pod="calico-system/whisker-77c4c75f9f-bv8h8" Mar 6 01:43:11.868248 kubelet[2575]: I0306 01:43:11.868155 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f99c1bc-0e67-404c-8a16-83d04581bbea-whisker-ca-bundle\") pod \"whisker-77c4c75f9f-bv8h8\" (UID: \"1f99c1bc-0e67-404c-8a16-83d04581bbea\") " pod="calico-system/whisker-77c4c75f9f-bv8h8" Mar 6 01:43:11.868248 kubelet[2575]: I0306 01:43:11.868175 2575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1f99c1bc-0e67-404c-8a16-83d04581bbea-whisker-backend-key-pair\") pod \"whisker-77c4c75f9f-bv8h8\" (UID: \"1f99c1bc-0e67-404c-8a16-83d04581bbea\") " pod="calico-system/whisker-77c4c75f9f-bv8h8" Mar 6 01:43:11.914834 systemd-networkd[1413]: calicc303200f50: Gained IPv6LL Mar 6 01:43:11.920975 systemd[1]: Started cri-containerd-accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339.scope - libcontainer container accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339. 
Mar 6 01:43:11.971334 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:43:12.012555 containerd[1476]: time="2026-03-06T01:43:12.011840232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cv4hv,Uid:2ad6f57d-dd1e-42e4-a3fb-1ed523c0dea6,Namespace:calico-system,Attempt:0,} returns sandbox id \"accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339\"" Mar 6 01:43:12.071246 containerd[1476]: time="2026-03-06T01:43:12.071168695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77c4c75f9f-bv8h8,Uid:1f99c1bc-0e67-404c-8a16-83d04581bbea,Namespace:calico-system,Attempt:0,}" Mar 6 01:43:12.298379 systemd-networkd[1413]: cali61e94dda459: Gained IPv6LL Mar 6 01:43:12.308818 systemd-networkd[1413]: cali76c2926082e: Link UP Mar 6 01:43:12.309162 systemd-networkd[1413]: cali76c2926082e: Gained carrier Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.161 [ERROR][4552] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.180 [INFO][4552] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0 whisker-77c4c75f9f- calico-system 1f99c1bc-0e67-404c-8a16-83d04581bbea 1024 0 2026-03-06 01:43:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77c4c75f9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77c4c75f9f-bv8h8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali76c2926082e [] [] }} ContainerID="f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" Namespace="calico-system" 
Pod="whisker-77c4c75f9f-bv8h8" WorkloadEndpoint="localhost-k8s-whisker--77c4c75f9f--bv8h8-" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.180 [INFO][4552] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" Namespace="calico-system" Pod="whisker-77c4c75f9f-bv8h8" WorkloadEndpoint="localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.237 [INFO][4586] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" HandleID="k8s-pod-network.f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" Workload="localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.249 [INFO][4586] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" HandleID="k8s-pod-network.f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" Workload="localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc3b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77c4c75f9f-bv8h8", "timestamp":"2026-03-06 01:43:12.237187022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00063a6e0)} Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.249 [INFO][4586] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.249 [INFO][4586] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.250 [INFO][4586] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.254 [INFO][4586] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" host="localhost" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.262 [INFO][4586] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.271 [INFO][4586] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.273 [INFO][4586] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.277 [INFO][4586] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.277 [INFO][4586] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" host="localhost" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.280 [INFO][4586] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574 Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.286 [INFO][4586] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" host="localhost" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.294 [INFO][4586] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" host="localhost" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.294 [INFO][4586] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" host="localhost" Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.294 [INFO][4586] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:43:12.331987 containerd[1476]: 2026-03-06 01:43:12.294 [INFO][4586] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" HandleID="k8s-pod-network.f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" Workload="localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0" Mar 6 01:43:12.333841 containerd[1476]: 2026-03-06 01:43:12.299 [INFO][4552] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" Namespace="calico-system" Pod="whisker-77c4c75f9f-bv8h8" WorkloadEndpoint="localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0", GenerateName:"whisker-77c4c75f9f-", Namespace:"calico-system", SelfLink:"", UID:"1f99c1bc-0e67-404c-8a16-83d04581bbea", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77c4c75f9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77c4c75f9f-bv8h8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali76c2926082e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:12.333841 containerd[1476]: 2026-03-06 01:43:12.299 [INFO][4552] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" Namespace="calico-system" Pod="whisker-77c4c75f9f-bv8h8" WorkloadEndpoint="localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0" Mar 6 01:43:12.333841 containerd[1476]: 2026-03-06 01:43:12.299 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76c2926082e ContainerID="f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" Namespace="calico-system" Pod="whisker-77c4c75f9f-bv8h8" WorkloadEndpoint="localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0" Mar 6 01:43:12.333841 containerd[1476]: 2026-03-06 01:43:12.307 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" Namespace="calico-system" Pod="whisker-77c4c75f9f-bv8h8" WorkloadEndpoint="localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0" Mar 6 01:43:12.333841 containerd[1476]: 2026-03-06 01:43:12.308 [INFO][4552] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" Namespace="calico-system" Pod="whisker-77c4c75f9f-bv8h8" 
WorkloadEndpoint="localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0", GenerateName:"whisker-77c4c75f9f-", Namespace:"calico-system", SelfLink:"", UID:"1f99c1bc-0e67-404c-8a16-83d04581bbea", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 43, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77c4c75f9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574", Pod:"whisker-77c4c75f9f-bv8h8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali76c2926082e", MAC:"6a:d9:50:2b:44:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:43:12.333841 containerd[1476]: 2026-03-06 01:43:12.326 [INFO][4552] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574" Namespace="calico-system" Pod="whisker-77c4c75f9f-bv8h8" WorkloadEndpoint="localhost-k8s-whisker--77c4c75f9f--bv8h8-eth0" Mar 6 01:43:12.347615 kernel: calico-node[4554]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 6 
01:43:12.371222 containerd[1476]: time="2026-03-06T01:43:12.371094542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:43:12.371775 containerd[1476]: time="2026-03-06T01:43:12.371195274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:43:12.371775 containerd[1476]: time="2026-03-06T01:43:12.371215571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:12.373031 containerd[1476]: time="2026-03-06T01:43:12.372503387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:43:12.433805 systemd[1]: Started cri-containerd-f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574.scope - libcontainer container f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574. 
Mar 6 01:43:12.475101 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 6 01:43:12.552161 containerd[1476]: time="2026-03-06T01:43:12.551949667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77c4c75f9f-bv8h8,Uid:1f99c1bc-0e67-404c-8a16-83d04581bbea,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574\""
Mar 6 01:43:12.627376 kubelet[2575]: E0306 01:43:12.627289 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:43:12.629224 kubelet[2575]: E0306 01:43:12.629146 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:43:12.681884 systemd-networkd[1413]: cali6d3e954de4c: Gained IPv6LL
Mar 6 01:43:12.683661 systemd-networkd[1413]: cali5eeb73fccf4: Gained IPv6LL
Mar 6 01:43:13.002998 systemd-networkd[1413]: calic70e484312f: Gained IPv6LL
Mar 6 01:43:13.138842 systemd-networkd[1413]: vxlan.calico: Link UP
Mar 6 01:43:13.138855 systemd-networkd[1413]: vxlan.calico: Gained carrier
Mar 6 01:43:13.159765 kubelet[2575]: I0306 01:43:13.159596 2575 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ed20f622-f835-462f-a791-b755c458eb23" path="/var/lib/kubelet/pods/ed20f622-f835-462f-a791-b755c458eb23/volumes"
Mar 6 01:43:13.267479 containerd[1476]: time="2026-03-06T01:43:13.267273387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:13.269119 containerd[1476]: time="2026-03-06T01:43:13.268724182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 6 01:43:13.270894 containerd[1476]: time="2026-03-06T01:43:13.270872067Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:13.282974 containerd[1476]: time="2026-03-06T01:43:13.282878139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:13.284843 containerd[1476]: time="2026-03-06T01:43:13.284725845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.801282405s"
Mar 6 01:43:13.284843 containerd[1476]: time="2026-03-06T01:43:13.284806478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 6 01:43:13.291848 containerd[1476]: time="2026-03-06T01:43:13.291776499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 6 01:43:13.323758 containerd[1476]: time="2026-03-06T01:43:13.323683053Z" level=info msg="CreateContainer within sandbox \"717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 6 01:43:13.404349 containerd[1476]: time="2026-03-06T01:43:13.404250490Z" level=info msg="CreateContainer within sandbox \"717128c158d2f0a8808a8b70f6837a0626c525e3d81576300f9d1cc1d863b884\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"618f6f23aa824ac4dcdbebbf47a8d991ff2fb6d3fc89b4eb72e231e2bef642dd\""
Mar 6 01:43:13.407005 containerd[1476]: time="2026-03-06T01:43:13.406453440Z" level=info msg="StartContainer for \"618f6f23aa824ac4dcdbebbf47a8d991ff2fb6d3fc89b4eb72e231e2bef642dd\""
Mar 6 01:43:13.456427 systemd[1]: Started cri-containerd-618f6f23aa824ac4dcdbebbf47a8d991ff2fb6d3fc89b4eb72e231e2bef642dd.scope - libcontainer container 618f6f23aa824ac4dcdbebbf47a8d991ff2fb6d3fc89b4eb72e231e2bef642dd.
Mar 6 01:43:13.535911 containerd[1476]: time="2026-03-06T01:43:13.535793994Z" level=info msg="StartContainer for \"618f6f23aa824ac4dcdbebbf47a8d991ff2fb6d3fc89b4eb72e231e2bef642dd\" returns successfully"
Mar 6 01:43:13.633939 kubelet[2575]: E0306 01:43:13.633871 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:43:13.634305 kubelet[2575]: E0306 01:43:13.634268 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:43:13.642843 systemd-networkd[1413]: calia4a08b6859a: Gained IPv6LL
Mar 6 01:43:13.651739 kubelet[2575]: I0306 01:43:13.651501 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5cfb8f57f7-pz9ts" podStartSLOduration=15.845702157 podStartE2EDuration="18.651474583s" podCreationTimestamp="2026-03-06 01:42:55 +0000 UTC" firstStartedPulling="2026-03-06 01:43:10.483033993 +0000 UTC m=+36.019638027" lastFinishedPulling="2026-03-06 01:43:13.288806418 +0000 UTC m=+38.825410453" observedRunningTime="2026-03-06 01:43:13.650842847 +0000 UTC m=+39.187446882" watchObservedRunningTime="2026-03-06 01:43:13.651474583 +0000 UTC m=+39.188078618"
Mar 6 01:43:14.153988 systemd-networkd[1413]: cali76c2926082e: Gained IPv6LL
Mar 6 01:43:14.409795 systemd-networkd[1413]: vxlan.calico: Gained IPv6LL
Mar 6 01:43:14.782043 containerd[1476]: time="2026-03-06T01:43:14.781950044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:14.782974 containerd[1476]: time="2026-03-06T01:43:14.782911283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Mar 6 01:43:14.784206 containerd[1476]: time="2026-03-06T01:43:14.784141284Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:14.787979 containerd[1476]: time="2026-03-06T01:43:14.787912884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:14.788950 containerd[1476]: time="2026-03-06T01:43:14.788876681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.497028236s"
Mar 6 01:43:14.788950 containerd[1476]: time="2026-03-06T01:43:14.788933478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 6 01:43:14.790686 containerd[1476]: time="2026-03-06T01:43:14.790410376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 6 01:43:14.795467 containerd[1476]: time="2026-03-06T01:43:14.795396851Z" level=info msg="CreateContainer within sandbox \"c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 6 01:43:14.811659 containerd[1476]: time="2026-03-06T01:43:14.811625434Z" level=info msg="CreateContainer within sandbox \"c6f1b93739de14d3927b5585dc7a2047dab1b017b43ed738f5fec0c5386fc88c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"49455cfaa93229c7d7ea514072973fabeec6fab5004244781b6993dcf30d1eec\""
Mar 6 01:43:14.812746 containerd[1476]: time="2026-03-06T01:43:14.812451514Z" level=info msg="StartContainer for \"49455cfaa93229c7d7ea514072973fabeec6fab5004244781b6993dcf30d1eec\""
Mar 6 01:43:14.851751 systemd[1]: Started cri-containerd-49455cfaa93229c7d7ea514072973fabeec6fab5004244781b6993dcf30d1eec.scope - libcontainer container 49455cfaa93229c7d7ea514072973fabeec6fab5004244781b6993dcf30d1eec.
Mar 6 01:43:14.886160 containerd[1476]: time="2026-03-06T01:43:14.885148580Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:14.887588 containerd[1476]: time="2026-03-06T01:43:14.887476408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 6 01:43:14.890947 containerd[1476]: time="2026-03-06T01:43:14.890885073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 100.434661ms"
Mar 6 01:43:14.891121 containerd[1476]: time="2026-03-06T01:43:14.891100379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 6 01:43:14.894970 containerd[1476]: time="2026-03-06T01:43:14.894896448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 6 01:43:14.900415 containerd[1476]: time="2026-03-06T01:43:14.900337681Z" level=info msg="CreateContainer within sandbox \"fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 6 01:43:14.902006 containerd[1476]: time="2026-03-06T01:43:14.901895037Z" level=info msg="StartContainer for \"49455cfaa93229c7d7ea514072973fabeec6fab5004244781b6993dcf30d1eec\" returns successfully"
Mar 6 01:43:14.929980 containerd[1476]: time="2026-03-06T01:43:14.929890067Z" level=info msg="CreateContainer within sandbox \"fb3923822e8f9fc49d249401d7620829cb40fa27d955470e3a2a699107cc29f2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ad0ba67e30846a147bae818d90a3488c8985067c1975b3614098ce50e8578fe8\""
Mar 6 01:43:14.932162 containerd[1476]: time="2026-03-06T01:43:14.931026220Z" level=info msg="StartContainer for \"ad0ba67e30846a147bae818d90a3488c8985067c1975b3614098ce50e8578fe8\""
Mar 6 01:43:14.981766 systemd[1]: Started cri-containerd-ad0ba67e30846a147bae818d90a3488c8985067c1975b3614098ce50e8578fe8.scope - libcontainer container ad0ba67e30846a147bae818d90a3488c8985067c1975b3614098ce50e8578fe8.
Mar 6 01:43:15.069418 containerd[1476]: time="2026-03-06T01:43:15.069053013Z" level=info msg="StartContainer for \"ad0ba67e30846a147bae818d90a3488c8985067c1975b3614098ce50e8578fe8\" returns successfully"
Mar 6 01:43:15.666892 kubelet[2575]: I0306 01:43:15.666807 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-769b589d67-dgxvb" podStartSLOduration=21.405649509 podStartE2EDuration="24.666783592s" podCreationTimestamp="2026-03-06 01:42:51 +0000 UTC" firstStartedPulling="2026-03-06 01:43:11.529087826 +0000 UTC m=+37.065691860" lastFinishedPulling="2026-03-06 01:43:14.790221908 +0000 UTC m=+40.326825943" observedRunningTime="2026-03-06 01:43:15.663915568 +0000 UTC m=+41.200519604" watchObservedRunningTime="2026-03-06 01:43:15.666783592 +0000 UTC m=+41.203387647"
Mar 6 01:43:16.448263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189115745.mount: Deactivated successfully.
Mar 6 01:43:16.662632 kubelet[2575]: I0306 01:43:16.662568 2575 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 6 01:43:16.669699 kubelet[2575]: I0306 01:43:16.665168 2575 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 6 01:43:17.124352 containerd[1476]: time="2026-03-06T01:43:17.124110225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:17.126005 containerd[1476]: time="2026-03-06T01:43:17.125921520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 6 01:43:17.127406 containerd[1476]: time="2026-03-06T01:43:17.127341376Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:17.130214 containerd[1476]: time="2026-03-06T01:43:17.130107064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:17.131709 containerd[1476]: time="2026-03-06T01:43:17.131645625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.236675199s"
Mar 6 01:43:17.131709 containerd[1476]: time="2026-03-06T01:43:17.131703891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 6 01:43:17.133913 containerd[1476]: time="2026-03-06T01:43:17.133869316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 6 01:43:17.139389 containerd[1476]: time="2026-03-06T01:43:17.139341680Z" level=info msg="CreateContainer within sandbox \"94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 6 01:43:17.169749 containerd[1476]: time="2026-03-06T01:43:17.169704054Z" level=info msg="CreateContainer within sandbox \"94c17a636f3c2da625c9402373ac06f6ac907f8a99d7e9878f9e972616bb6265\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ad44d982140c0373be63b628f1b47ad42e62b99da152b0cdb7f353be27e93d6d\""
Mar 6 01:43:17.172510 containerd[1476]: time="2026-03-06T01:43:17.170907683Z" level=info msg="StartContainer for \"ad44d982140c0373be63b628f1b47ad42e62b99da152b0cdb7f353be27e93d6d\""
Mar 6 01:43:17.277743 systemd[1]: Started cri-containerd-ad44d982140c0373be63b628f1b47ad42e62b99da152b0cdb7f353be27e93d6d.scope - libcontainer container ad44d982140c0373be63b628f1b47ad42e62b99da152b0cdb7f353be27e93d6d.
Mar 6 01:43:17.329069 containerd[1476]: time="2026-03-06T01:43:17.328889394Z" level=info msg="StartContainer for \"ad44d982140c0373be63b628f1b47ad42e62b99da152b0cdb7f353be27e93d6d\" returns successfully"
Mar 6 01:43:17.729367 kubelet[2575]: I0306 01:43:17.728708 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-5c9lf" podStartSLOduration=21.411715099 podStartE2EDuration="26.728689076s" podCreationTimestamp="2026-03-06 01:42:51 +0000 UTC" firstStartedPulling="2026-03-06 01:43:11.816085911 +0000 UTC m=+37.352689947" lastFinishedPulling="2026-03-06 01:43:17.133059889 +0000 UTC m=+42.669663924" observedRunningTime="2026-03-06 01:43:17.725957936 +0000 UTC m=+43.262561991" watchObservedRunningTime="2026-03-06 01:43:17.728689076 +0000 UTC m=+43.265293112"
Mar 6 01:43:17.729367 kubelet[2575]: I0306 01:43:17.728953 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-769b589d67-kcwqc" podStartSLOduration=23.405524453 podStartE2EDuration="26.728946898s" podCreationTimestamp="2026-03-06 01:42:51 +0000 UTC" firstStartedPulling="2026-03-06 01:43:11.570164432 +0000 UTC m=+37.106768466" lastFinishedPulling="2026-03-06 01:43:14.893586866 +0000 UTC m=+40.430190911" observedRunningTime="2026-03-06 01:43:15.686047591 +0000 UTC m=+41.222651626" watchObservedRunningTime="2026-03-06 01:43:17.728946898 +0000 UTC m=+43.265550932"
Mar 6 01:43:17.777328 systemd[1]: run-containerd-runc-k8s.io-ad44d982140c0373be63b628f1b47ad42e62b99da152b0cdb7f353be27e93d6d-runc.GtS8sb.mount: Deactivated successfully.
Mar 6 01:43:17.911060 containerd[1476]: time="2026-03-06T01:43:17.910980185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:17.912088 containerd[1476]: time="2026-03-06T01:43:17.911992534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Mar 6 01:43:17.913297 containerd[1476]: time="2026-03-06T01:43:17.913166914Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:17.919002 containerd[1476]: time="2026-03-06T01:43:17.918929662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:17.919717 containerd[1476]: time="2026-03-06T01:43:17.919663397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 785.762384ms"
Mar 6 01:43:17.919717 containerd[1476]: time="2026-03-06T01:43:17.919710414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Mar 6 01:43:17.920916 containerd[1476]: time="2026-03-06T01:43:17.920878814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Mar 6 01:43:17.924981 containerd[1476]: time="2026-03-06T01:43:17.924929705Z" level=info msg="CreateContainer within sandbox \"accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 6 01:43:17.966502 containerd[1476]: time="2026-03-06T01:43:17.966370566Z" level=info msg="CreateContainer within sandbox \"accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6c7b61e304d1d48de7713219d86c0e62bdebf1695bb11777e1bb5e4948596006\""
Mar 6 01:43:17.967657 containerd[1476]: time="2026-03-06T01:43:17.967500475Z" level=info msg="StartContainer for \"6c7b61e304d1d48de7713219d86c0e62bdebf1695bb11777e1bb5e4948596006\""
Mar 6 01:43:17.997698 systemd[1]: Started cri-containerd-6c7b61e304d1d48de7713219d86c0e62bdebf1695bb11777e1bb5e4948596006.scope - libcontainer container 6c7b61e304d1d48de7713219d86c0e62bdebf1695bb11777e1bb5e4948596006.
Mar 6 01:43:18.045935 containerd[1476]: time="2026-03-06T01:43:18.045852007Z" level=info msg="StartContainer for \"6c7b61e304d1d48de7713219d86c0e62bdebf1695bb11777e1bb5e4948596006\" returns successfully"
Mar 6 01:43:18.469439 containerd[1476]: time="2026-03-06T01:43:18.469345060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:18.470299 containerd[1476]: time="2026-03-06T01:43:18.470163673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Mar 6 01:43:18.471872 containerd[1476]: time="2026-03-06T01:43:18.471827350Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:18.474962 containerd[1476]: time="2026-03-06T01:43:18.474875406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:18.475769 containerd[1476]: time="2026-03-06T01:43:18.475732822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 554.82206ms"
Mar 6 01:43:18.475861 containerd[1476]: time="2026-03-06T01:43:18.475772675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Mar 6 01:43:18.477983 containerd[1476]: time="2026-03-06T01:43:18.477754676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 6 01:43:18.482418 containerd[1476]: time="2026-03-06T01:43:18.482355700Z" level=info msg="CreateContainer within sandbox \"f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 6 01:43:18.514180 containerd[1476]: time="2026-03-06T01:43:18.514056263Z" level=info msg="CreateContainer within sandbox \"f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8c6d3e72c4db9adcb957d9d3bd18d541c8457fc39f3a3f807633c7a694c6e29f\""
Mar 6 01:43:18.514940 containerd[1476]: time="2026-03-06T01:43:18.514902642Z" level=info msg="StartContainer for \"8c6d3e72c4db9adcb957d9d3bd18d541c8457fc39f3a3f807633c7a694c6e29f\""
Mar 6 01:43:18.566807 systemd[1]: Started cri-containerd-8c6d3e72c4db9adcb957d9d3bd18d541c8457fc39f3a3f807633c7a694c6e29f.scope - libcontainer container 8c6d3e72c4db9adcb957d9d3bd18d541c8457fc39f3a3f807633c7a694c6e29f.
Mar 6 01:43:18.618860 containerd[1476]: time="2026-03-06T01:43:18.618792381Z" level=info msg="StartContainer for \"8c6d3e72c4db9adcb957d9d3bd18d541c8457fc39f3a3f807633c7a694c6e29f\" returns successfully"
Mar 6 01:43:19.977960 containerd[1476]: time="2026-03-06T01:43:19.977782538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:19.980886 containerd[1476]: time="2026-03-06T01:43:19.980790000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 6 01:43:19.983039 containerd[1476]: time="2026-03-06T01:43:19.982955174Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:19.986880 containerd[1476]: time="2026-03-06T01:43:19.986848173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:19.988390 containerd[1476]: time="2026-03-06T01:43:19.988296340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.510503894s"
Mar 6 01:43:19.988390 containerd[1476]: time="2026-03-06T01:43:19.988354206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 6 01:43:19.993764 containerd[1476]: time="2026-03-06T01:43:19.993703220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 6 01:43:20.001823 containerd[1476]: time="2026-03-06T01:43:20.001344018Z" level=info msg="CreateContainer within sandbox \"accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 6 01:43:20.037092 containerd[1476]: time="2026-03-06T01:43:20.036961564Z" level=info msg="CreateContainer within sandbox \"accdbe9ed9cca34d2919a68e704ea3a155d44a9880502a909f36c453aed9c339\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f48d2faca8f50f6225d5b6062aa8a9e23fcb8b32d4cdde74c4b3916b66d34100\""
Mar 6 01:43:20.038769 containerd[1476]: time="2026-03-06T01:43:20.037699746Z" level=info msg="StartContainer for \"f48d2faca8f50f6225d5b6062aa8a9e23fcb8b32d4cdde74c4b3916b66d34100\""
Mar 6 01:43:20.137770 systemd[1]: Started cri-containerd-f48d2faca8f50f6225d5b6062aa8a9e23fcb8b32d4cdde74c4b3916b66d34100.scope - libcontainer container f48d2faca8f50f6225d5b6062aa8a9e23fcb8b32d4cdde74c4b3916b66d34100.
Mar 6 01:43:20.193467 containerd[1476]: time="2026-03-06T01:43:20.193370162Z" level=info msg="StartContainer for \"f48d2faca8f50f6225d5b6062aa8a9e23fcb8b32d4cdde74c4b3916b66d34100\" returns successfully"
Mar 6 01:43:20.734878 kubelet[2575]: I0306 01:43:20.733134 2575 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 6 01:43:20.734878 kubelet[2575]: I0306 01:43:20.733211 2575 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 6 01:43:20.795216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196452738.mount: Deactivated successfully.
Mar 6 01:43:20.822690 containerd[1476]: time="2026-03-06T01:43:20.822473562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:20.824004 containerd[1476]: time="2026-03-06T01:43:20.823929016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Mar 6 01:43:20.825709 containerd[1476]: time="2026-03-06T01:43:20.825646383Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:20.828824 containerd[1476]: time="2026-03-06T01:43:20.828780174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:43:20.829887 containerd[1476]: time="2026-03-06T01:43:20.829846263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 836.066852ms"
Mar 6 01:43:20.829939 containerd[1476]: time="2026-03-06T01:43:20.829898979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Mar 6 01:43:20.835898 containerd[1476]: time="2026-03-06T01:43:20.835804103Z" level=info msg="CreateContainer within sandbox \"f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 6 01:43:20.852384 containerd[1476]: time="2026-03-06T01:43:20.852333586Z" level=info msg="CreateContainer within sandbox \"f6baf5b3d6288f5b582b714437bdf825dbbd6e0d1a4ecff8e6a5c8803418f574\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2f24b05b88dc2f88ad16f7ad695ede83e70323b0aeee3711a3d9d4072fc97d8e\""
Mar 6 01:43:20.853068 containerd[1476]: time="2026-03-06T01:43:20.853023880Z" level=info msg="StartContainer for \"2f24b05b88dc2f88ad16f7ad695ede83e70323b0aeee3711a3d9d4072fc97d8e\""
Mar 6 01:43:20.904761 systemd[1]: Started cri-containerd-2f24b05b88dc2f88ad16f7ad695ede83e70323b0aeee3711a3d9d4072fc97d8e.scope - libcontainer container 2f24b05b88dc2f88ad16f7ad695ede83e70323b0aeee3711a3d9d4072fc97d8e.
Mar 6 01:43:20.968017 containerd[1476]: time="2026-03-06T01:43:20.967949361Z" level=info msg="StartContainer for \"2f24b05b88dc2f88ad16f7ad695ede83e70323b0aeee3711a3d9d4072fc97d8e\" returns successfully"
Mar 6 01:43:21.344042 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:33854.service - OpenSSH per-connection server daemon (10.0.0.1:33854).
Mar 6 01:43:21.402797 sshd[5213]: Accepted publickey for core from 10.0.0.1 port 33854 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:43:21.406932 sshd[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:43:21.412581 systemd-logind[1456]: New session 10 of user core.
Mar 6 01:43:21.420775 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 6 01:43:21.748348 kubelet[2575]: I0306 01:43:21.748118 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-77c4c75f9f-bv8h8" podStartSLOduration=2.473816375 podStartE2EDuration="10.748098687s" podCreationTimestamp="2026-03-06 01:43:11 +0000 UTC" firstStartedPulling="2026-03-06 01:43:12.556442777 +0000 UTC m=+38.093046811" lastFinishedPulling="2026-03-06 01:43:20.830725087 +0000 UTC m=+46.367329123" observedRunningTime="2026-03-06 01:43:21.747103252 +0000 UTC m=+47.283707287" watchObservedRunningTime="2026-03-06 01:43:21.748098687 +0000 UTC m=+47.284702723" Mar 6 01:43:21.749186 kubelet[2575]: I0306 01:43:21.748573 2575 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-cv4hv" podStartSLOduration=19.774726293 podStartE2EDuration="27.748507246s" podCreationTimestamp="2026-03-06 01:42:54 +0000 UTC" firstStartedPulling="2026-03-06 01:43:12.016204632 +0000 UTC m=+37.552808677" lastFinishedPulling="2026-03-06 01:43:19.989985595 +0000 UTC m=+45.526589630" observedRunningTime="2026-03-06 01:43:20.741663263 +0000 UTC m=+46.278267297" watchObservedRunningTime="2026-03-06 01:43:21.748507246 +0000 UTC m=+47.285111291" Mar 6 01:43:21.950017 sshd[5213]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:21.956026 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:33854.service: Deactivated successfully. Mar 6 01:43:21.958665 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 01:43:21.959895 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Mar 6 01:43:21.963632 systemd-logind[1456]: Removed session 10. Mar 6 01:43:26.968897 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:33860.service - OpenSSH per-connection server daemon (10.0.0.1:33860). 
Mar 6 01:43:27.175180 sshd[5263]: Accepted publickey for core from 10.0.0.1 port 33860 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:27.178180 sshd[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:27.184682 systemd-logind[1456]: New session 11 of user core. Mar 6 01:43:27.193850 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 6 01:43:27.368663 sshd[5263]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:27.378757 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:33860.service: Deactivated successfully. Mar 6 01:43:27.381960 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 01:43:27.385938 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Mar 6 01:43:27.391502 systemd-logind[1456]: Removed session 11. Mar 6 01:43:32.388673 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:36706.service - OpenSSH per-connection server daemon (10.0.0.1:36706). Mar 6 01:43:32.488932 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 36706 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:32.491310 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:32.497853 systemd-logind[1456]: New session 12 of user core. Mar 6 01:43:32.513051 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 6 01:43:32.737674 sshd[5298]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:32.742449 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:36706.service: Deactivated successfully. Mar 6 01:43:32.745214 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 01:43:32.748394 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Mar 6 01:43:32.750367 systemd-logind[1456]: Removed session 12. Mar 6 01:43:37.749608 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:36712.service - OpenSSH per-connection server daemon (10.0.0.1:36712). 
Mar 6 01:43:37.810282 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 36712 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:37.812427 sshd[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:37.820431 systemd-logind[1456]: New session 13 of user core. Mar 6 01:43:37.831811 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 6 01:43:37.979418 sshd[5333]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:37.984241 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:36712.service: Deactivated successfully. Mar 6 01:43:37.986288 systemd[1]: session-13.scope: Deactivated successfully. Mar 6 01:43:37.987473 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Mar 6 01:43:37.989008 systemd-logind[1456]: Removed session 13. Mar 6 01:43:43.006130 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:48252.service - OpenSSH per-connection server daemon (10.0.0.1:48252). Mar 6 01:43:43.117889 sshd[5372]: Accepted publickey for core from 10.0.0.1 port 48252 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw Mar 6 01:43:43.120663 sshd[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:43:43.128772 systemd-logind[1456]: New session 14 of user core. Mar 6 01:43:43.135961 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 6 01:43:43.150748 kubelet[2575]: E0306 01:43:43.150707 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:43:43.284891 kubelet[2575]: I0306 01:43:43.284825 2575 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:43:43.298716 sshd[5372]: pam_unix(sshd:session): session closed for user core Mar 6 01:43:43.303342 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:48252.service: Deactivated successfully. 
Mar 6 01:43:43.306313 systemd[1]: session-14.scope: Deactivated successfully.
Mar 6 01:43:43.308062 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit.
Mar 6 01:43:43.310445 systemd-logind[1456]: Removed session 14.
Mar 6 01:43:48.320751 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:48262.service - OpenSSH per-connection server daemon (10.0.0.1:48262).
Mar 6 01:43:48.391212 sshd[5438]: Accepted publickey for core from 10.0.0.1 port 48262 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:43:48.395725 sshd[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:43:48.406586 systemd-logind[1456]: New session 15 of user core.
Mar 6 01:43:48.409862 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 6 01:43:48.662184 sshd[5438]: pam_unix(sshd:session): session closed for user core
Mar 6 01:43:48.671473 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:48262.service: Deactivated successfully.
Mar 6 01:43:48.673805 systemd[1]: session-15.scope: Deactivated successfully.
Mar 6 01:43:48.676851 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit.
Mar 6 01:43:48.687228 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:48268.service - OpenSSH per-connection server daemon (10.0.0.1:48268).
Mar 6 01:43:48.690219 systemd-logind[1456]: Removed session 15.
Mar 6 01:43:48.746068 sshd[5463]: Accepted publickey for core from 10.0.0.1 port 48268 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:43:48.747084 sshd[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:43:48.755729 systemd-logind[1456]: New session 16 of user core.
Mar 6 01:43:48.763025 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 6 01:43:48.992945 sshd[5463]: pam_unix(sshd:session): session closed for user core
Mar 6 01:43:49.003873 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:48268.service: Deactivated successfully.
Mar 6 01:43:49.009252 systemd[1]: session-16.scope: Deactivated successfully.
Mar 6 01:43:49.014001 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit.
Mar 6 01:43:49.027851 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:48272.service - OpenSSH per-connection server daemon (10.0.0.1:48272).
Mar 6 01:43:49.029871 systemd-logind[1456]: Removed session 16.
Mar 6 01:43:49.094867 sshd[5498]: Accepted publickey for core from 10.0.0.1 port 48272 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:43:49.096946 sshd[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:43:49.103298 systemd-logind[1456]: New session 17 of user core.
Mar 6 01:43:49.114866 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 6 01:43:49.289992 sshd[5498]: pam_unix(sshd:session): session closed for user core
Mar 6 01:43:49.295322 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:48272.service: Deactivated successfully.
Mar 6 01:43:49.298186 systemd[1]: session-17.scope: Deactivated successfully.
Mar 6 01:43:49.299612 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit.
Mar 6 01:43:49.302084 systemd-logind[1456]: Removed session 17.
Mar 6 01:43:54.303352 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:43884.service - OpenSSH per-connection server daemon (10.0.0.1:43884).
Mar 6 01:43:54.386835 sshd[5525]: Accepted publickey for core from 10.0.0.1 port 43884 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:43:54.390903 sshd[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:43:54.398568 systemd-logind[1456]: New session 18 of user core.
Mar 6 01:43:54.408098 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 6 01:43:54.517284 kubelet[2575]: I0306 01:43:54.517123 2575 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 6 01:43:54.604865 sshd[5525]: pam_unix(sshd:session): session closed for user core
Mar 6 01:43:54.610051 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:43884.service: Deactivated successfully.
Mar 6 01:43:54.612967 systemd[1]: session-18.scope: Deactivated successfully.
Mar 6 01:43:54.615208 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit.
Mar 6 01:43:54.616978 systemd-logind[1456]: Removed session 18.
Mar 6 01:43:59.647748 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:43892.service - OpenSSH per-connection server daemon (10.0.0.1:43892).
Mar 6 01:43:59.964621 sshd[5545]: Accepted publickey for core from 10.0.0.1 port 43892 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:43:59.972071 sshd[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:43:59.989005 systemd-logind[1456]: New session 19 of user core.
Mar 6 01:43:59.994927 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 6 01:44:00.685480 sshd[5545]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:00.696409 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:43892.service: Deactivated successfully.
Mar 6 01:44:00.700416 systemd[1]: session-19.scope: Deactivated successfully.
Mar 6 01:44:00.705120 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit.
Mar 6 01:44:00.706755 systemd-logind[1456]: Removed session 19.
Mar 6 01:44:05.698307 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:47410.service - OpenSSH per-connection server daemon (10.0.0.1:47410).
Mar 6 01:44:05.745731 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 47410 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:05.748125 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:05.753691 systemd-logind[1456]: New session 20 of user core.
Mar 6 01:44:05.762900 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 6 01:44:05.913347 sshd[5560]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:05.925224 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:47410.service: Deactivated successfully.
Mar 6 01:44:05.927234 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 01:44:05.928966 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit.
Mar 6 01:44:05.941942 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:47418.service - OpenSSH per-connection server daemon (10.0.0.1:47418).
Mar 6 01:44:05.943308 systemd-logind[1456]: Removed session 20.
Mar 6 01:44:05.980371 sshd[5574]: Accepted publickey for core from 10.0.0.1 port 47418 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:05.982652 sshd[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:05.989882 systemd-logind[1456]: New session 21 of user core.
Mar 6 01:44:06.006009 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 6 01:44:06.410429 sshd[5574]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:06.444397 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:47424.service - OpenSSH per-connection server daemon (10.0.0.1:47424).
Mar 6 01:44:06.446704 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:47418.service: Deactivated successfully.
Mar 6 01:44:06.449962 systemd[1]: session-21.scope: Deactivated successfully.
Mar 6 01:44:06.452008 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit.
Mar 6 01:44:06.454147 systemd-logind[1456]: Removed session 21.
Mar 6 01:44:06.511298 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 47424 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:06.513685 sshd[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:06.521120 systemd-logind[1456]: New session 22 of user core.
Mar 6 01:44:06.537073 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 6 01:44:07.479993 sshd[5584]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:07.487487 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:47424.service: Deactivated successfully.
Mar 6 01:44:07.490168 systemd[1]: session-22.scope: Deactivated successfully.
Mar 6 01:44:07.493219 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit.
Mar 6 01:44:07.503092 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:47438.service - OpenSSH per-connection server daemon (10.0.0.1:47438).
Mar 6 01:44:07.504729 systemd-logind[1456]: Removed session 22.
Mar 6 01:44:07.574736 sshd[5616]: Accepted publickey for core from 10.0.0.1 port 47438 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:07.577964 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:07.585318 systemd-logind[1456]: New session 23 of user core.
Mar 6 01:44:07.592998 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 6 01:44:08.046724 sshd[5616]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:08.059284 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:47438.service: Deactivated successfully.
Mar 6 01:44:08.066791 systemd[1]: session-23.scope: Deactivated successfully.
Mar 6 01:44:08.071178 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit.
Mar 6 01:44:08.080189 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:47454.service - OpenSSH per-connection server daemon (10.0.0.1:47454).
Mar 6 01:44:08.082660 systemd-logind[1456]: Removed session 23.
Mar 6 01:44:08.118144 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 47454 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:08.120454 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:08.127739 systemd-logind[1456]: New session 24 of user core.
Mar 6 01:44:08.133899 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 6 01:44:08.276335 sshd[5629]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:08.281925 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:47454.service: Deactivated successfully.
Mar 6 01:44:08.284220 systemd[1]: session-24.scope: Deactivated successfully.
Mar 6 01:44:08.285499 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit.
Mar 6 01:44:08.287281 systemd-logind[1456]: Removed session 24.
Mar 6 01:44:10.158071 kubelet[2575]: E0306 01:44:10.158012 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:44:11.151138 kubelet[2575]: E0306 01:44:11.151021 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:44:11.652146 systemd[1]: run-containerd-runc-k8s.io-86a1042dca1558125bec0f11aef8bd7e94e2973be95c96f6309b86f6709fa9fd-runc.CxBcfW.mount: Deactivated successfully.
Mar 6 01:44:13.300949 systemd[1]: Started sshd@24-10.0.0.92:22-10.0.0.1:34566.service - OpenSSH per-connection server daemon (10.0.0.1:34566).
Mar 6 01:44:13.358370 sshd[5672]: Accepted publickey for core from 10.0.0.1 port 34566 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:13.361210 sshd[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:13.368490 systemd-logind[1456]: New session 25 of user core.
Mar 6 01:44:13.378846 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 6 01:44:13.554265 sshd[5672]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:13.558793 systemd[1]: sshd@24-10.0.0.92:22-10.0.0.1:34566.service: Deactivated successfully.
Mar 6 01:44:13.561210 systemd[1]: session-25.scope: Deactivated successfully.
Mar 6 01:44:13.562182 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit.
Mar 6 01:44:13.563936 systemd-logind[1456]: Removed session 25.
Mar 6 01:44:14.150191 kubelet[2575]: E0306 01:44:14.150108 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:44:18.150821 kubelet[2575]: E0306 01:44:18.150746 2575 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:44:18.572043 systemd[1]: Started sshd@25-10.0.0.92:22-10.0.0.1:34572.service - OpenSSH per-connection server daemon (10.0.0.1:34572).
Mar 6 01:44:18.622076 sshd[5710]: Accepted publickey for core from 10.0.0.1 port 34572 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:18.624513 sshd[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:18.632706 systemd-logind[1456]: New session 26 of user core.
Mar 6 01:44:18.645870 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 6 01:44:18.939233 sshd[5710]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:18.944763 systemd-logind[1456]: Session 26 logged out. Waiting for processes to exit.
Mar 6 01:44:18.946146 systemd[1]: sshd@25-10.0.0.92:22-10.0.0.1:34572.service: Deactivated successfully.
Mar 6 01:44:18.950983 systemd[1]: session-26.scope: Deactivated successfully.
Mar 6 01:44:18.956683 systemd-logind[1456]: Removed session 26.
Mar 6 01:44:23.951691 systemd[1]: Started sshd@26-10.0.0.92:22-10.0.0.1:42100.service - OpenSSH per-connection server daemon (10.0.0.1:42100).
Mar 6 01:44:24.032964 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 42100 ssh2: RSA SHA256:VNs8RziOHQ6y6bQCFMvMB7BrTMZ/MsZL/2tqqrbfoHw
Mar 6 01:44:24.035719 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:44:24.042045 systemd-logind[1456]: New session 27 of user core.
Mar 6 01:44:24.048712 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 6 01:44:24.291671 sshd[5745]: pam_unix(sshd:session): session closed for user core
Mar 6 01:44:24.297238 systemd[1]: sshd@26-10.0.0.92:22-10.0.0.1:42100.service: Deactivated successfully.
Mar 6 01:44:24.300242 systemd[1]: session-27.scope: Deactivated successfully.
Mar 6 01:44:24.301591 systemd-logind[1456]: Session 27 logged out. Waiting for processes to exit.
Mar 6 01:44:24.303246 systemd-logind[1456]: Removed session 27.