Mar 6 01:34:57.255309 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:31:42 -00 2026
Mar 6 01:34:57.255332 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:34:57.255345 kernel: BIOS-provided physical RAM map:
Mar 6 01:34:57.255351 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 6 01:34:57.255356 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 6 01:34:57.255362 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 6 01:34:57.255368 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 6 01:34:57.255374 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 6 01:34:57.255380 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 6 01:34:57.255388 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 6 01:34:57.255394 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 6 01:34:57.255400 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 6 01:34:57.255424 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 6 01:34:57.255430 kernel: NX (Execute Disable) protection: active
Mar 6 01:34:57.255437 kernel: APIC: Static calls initialized
Mar 6 01:34:57.255461 kernel: SMBIOS 2.8 present.
Mar 6 01:34:57.255468 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 6 01:34:57.255474 kernel: Hypervisor detected: KVM
Mar 6 01:34:57.255480 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 6 01:34:57.255486 kernel: kvm-clock: using sched offset of 10756453074 cycles
Mar 6 01:34:57.255493 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 01:34:57.255499 kernel: tsc: Detected 2445.424 MHz processor
Mar 6 01:34:57.255505 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 6 01:34:57.255512 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 6 01:34:57.255521 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 6 01:34:57.255528 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 6 01:34:57.255534 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 6 01:34:57.255540 kernel: Using GB pages for direct mapping
Mar 6 01:34:57.255547 kernel: ACPI: Early table checksum verification disabled
Mar 6 01:34:57.255553 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 6 01:34:57.255559 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:34:57.255566 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:34:57.255572 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:34:57.255581 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 6 01:34:57.255587 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:34:57.255593 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:34:57.255599 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:34:57.255606 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:34:57.255612 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 6 01:34:57.255618 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 6 01:34:57.255628 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 6 01:34:57.255638 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 6 01:34:57.255644 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 6 01:34:57.255651 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 6 01:34:57.255657 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 6 01:34:57.255664 kernel: No NUMA configuration found
Mar 6 01:34:57.255670 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 6 01:34:57.255679 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 6 01:34:57.255686 kernel: Zone ranges:
Mar 6 01:34:57.255727 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 6 01:34:57.255733 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 6 01:34:57.255740 kernel: Normal empty
Mar 6 01:34:57.255746 kernel: Movable zone start for each node
Mar 6 01:34:57.255753 kernel: Early memory node ranges
Mar 6 01:34:57.255759 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 6 01:34:57.255765 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 6 01:34:57.255772 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 6 01:34:57.255782 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 01:34:57.255802 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 6 01:34:57.255809 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 6 01:34:57.255816 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 6 01:34:57.255822 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 6 01:34:57.255829 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 6 01:34:57.255835 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 6 01:34:57.255841 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 6 01:34:57.255848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 6 01:34:57.255858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 6 01:34:57.255864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 6 01:34:57.255871 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 6 01:34:57.255877 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 6 01:34:57.255883 kernel: TSC deadline timer available
Mar 6 01:34:57.255923 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 6 01:34:57.255929 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 6 01:34:57.255936 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 6 01:34:57.255956 kernel: kvm-guest: setup PV sched yield
Mar 6 01:34:57.255967 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 6 01:34:57.255974 kernel: Booting paravirtualized kernel on KVM
Mar 6 01:34:57.255980 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 6 01:34:57.255987 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 6 01:34:57.255993 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 6 01:34:57.256000 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 6 01:34:57.256006 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 6 01:34:57.256012 kernel: kvm-guest: PV spinlocks enabled
Mar 6 01:34:57.256018 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 6 01:34:57.256029 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:34:57.256036 kernel: random: crng init done
Mar 6 01:34:57.256042 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 01:34:57.256049 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 01:34:57.256055 kernel: Fallback order for Node 0: 0
Mar 6 01:34:57.256062 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 6 01:34:57.256068 kernel: Policy zone: DMA32
Mar 6 01:34:57.256075 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 01:34:57.256084 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136888K reserved, 0K cma-reserved)
Mar 6 01:34:57.256091 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 6 01:34:57.256097 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 6 01:34:57.256104 kernel: ftrace: allocated 149 pages with 4 groups
Mar 6 01:34:57.256110 kernel: Dynamic Preempt: voluntary
Mar 6 01:34:57.256117 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 01:34:57.256124 kernel: rcu: RCU event tracing is enabled.
Mar 6 01:34:57.256131 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 6 01:34:57.256137 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 01:34:57.256147 kernel: Rude variant of Tasks RCU enabled.
Mar 6 01:34:57.256154 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 01:34:57.256160 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 01:34:57.256167 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 6 01:34:57.256188 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 6 01:34:57.256195 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 01:34:57.256201 kernel: Console: colour VGA+ 80x25
Mar 6 01:34:57.256208 kernel: printk: console [ttyS0] enabled
Mar 6 01:34:57.256214 kernel: ACPI: Core revision 20230628
Mar 6 01:34:57.256224 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 6 01:34:57.256230 kernel: APIC: Switch to symmetric I/O mode setup
Mar 6 01:34:57.256237 kernel: x2apic enabled
Mar 6 01:34:57.256243 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 6 01:34:57.256249 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 6 01:34:57.256256 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 6 01:34:57.256262 kernel: kvm-guest: setup PV IPIs
Mar 6 01:34:57.256269 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 6 01:34:57.256289 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 6 01:34:57.256296 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 6 01:34:57.256302 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 6 01:34:57.256312 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 6 01:34:57.256322 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 6 01:34:57.256329 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 6 01:34:57.256336 kernel: Spectre V2 : Mitigation: Retpolines
Mar 6 01:34:57.256343 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 6 01:34:57.256350 kernel: Speculative Store Bypass: Vulnerable
Mar 6 01:34:57.256360 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 6 01:34:57.256381 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 6 01:34:57.256388 kernel: active return thunk: srso_alias_return_thunk
Mar 6 01:34:57.256394 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 6 01:34:57.256401 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 6 01:34:57.256408 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 01:34:57.256415 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 6 01:34:57.256421 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 6 01:34:57.256432 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 6 01:34:57.256438 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 6 01:34:57.256445 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 6 01:34:57.256452 kernel: Freeing SMP alternatives memory: 32K
Mar 6 01:34:57.256459 kernel: pid_max: default: 32768 minimum: 301
Mar 6 01:34:57.256465 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 6 01:34:57.256472 kernel: landlock: Up and running.
Mar 6 01:34:57.256479 kernel: SELinux: Initializing.
Mar 6 01:34:57.256486 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 01:34:57.256495 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 01:34:57.256502 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 6 01:34:57.256509 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:34:57.256516 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:34:57.256523 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:34:57.256529 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 6 01:34:57.256536 kernel: signal: max sigframe size: 1776
Mar 6 01:34:57.256556 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 01:34:57.256564 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 01:34:57.256574 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 6 01:34:57.256580 kernel: smp: Bringing up secondary CPUs ...
Mar 6 01:34:57.256587 kernel: smpboot: x86: Booting SMP configuration:
Mar 6 01:34:57.256594 kernel: .... node #0, CPUs: #1 #2 #3
Mar 6 01:34:57.256601 kernel: smp: Brought up 1 node, 4 CPUs
Mar 6 01:34:57.256607 kernel: smpboot: Max logical packages: 1
Mar 6 01:34:57.256614 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 6 01:34:57.256621 kernel: devtmpfs: initialized
Mar 6 01:34:57.256627 kernel: x86/mm: Memory block size: 128MB
Mar 6 01:34:57.256637 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 01:34:57.256644 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 6 01:34:57.256650 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 01:34:57.256657 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 01:34:57.256664 kernel: audit: initializing netlink subsys (disabled)
Mar 6 01:34:57.256670 kernel: audit: type=2000 audit(1772760893.277:1): state=initialized audit_enabled=0 res=1
Mar 6 01:34:57.256677 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 01:34:57.256684 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 6 01:34:57.256717 kernel: cpuidle: using governor menu
Mar 6 01:34:57.256727 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 01:34:57.256734 kernel: dca service started, version 1.12.1
Mar 6 01:34:57.256741 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 6 01:34:57.256748 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 6 01:34:57.256754 kernel: PCI: Using configuration type 1 for base access
Mar 6 01:34:57.256761 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 6 01:34:57.256768 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 01:34:57.256775 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 01:34:57.256782 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 01:34:57.256791 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 01:34:57.256798 kernel: ACPI: Added _OSI(Module Device)
Mar 6 01:34:57.256805 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 01:34:57.256812 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 01:34:57.256818 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 6 01:34:57.256825 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 6 01:34:57.256832 kernel: ACPI: Interpreter enabled
Mar 6 01:34:57.256838 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 6 01:34:57.256845 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 6 01:34:57.256855 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 6 01:34:57.256862 kernel: PCI: Using E820 reservations for host bridge windows
Mar 6 01:34:57.256868 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 6 01:34:57.256875 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 6 01:34:57.257267 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 01:34:57.257463 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 6 01:34:57.257617 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 6 01:34:57.257633 kernel: PCI host bridge to bus 0000:00
Mar 6 01:34:57.257836 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 6 01:34:57.258167 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 6 01:34:57.258384 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 6 01:34:57.258600 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 6 01:34:57.258844 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 6 01:34:57.259049 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 6 01:34:57.259196 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 6 01:34:57.259359 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 6 01:34:57.259515 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 6 01:34:57.259661 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 6 01:34:57.259852 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 6 01:34:57.260046 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 6 01:34:57.260195 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 6 01:34:57.260358 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 6 01:34:57.260505 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 6 01:34:57.260650 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 6 01:34:57.260836 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 6 01:34:57.261092 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 6 01:34:57.261411 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 6 01:34:57.261589 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 6 01:34:57.262039 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 6 01:34:57.262218 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 6 01:34:57.262421 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 6 01:34:57.262569 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 6 01:34:57.262779 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 6 01:34:57.263029 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 6 01:34:57.263193 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 6 01:34:57.263340 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 6 01:34:57.263495 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 6 01:34:57.263641 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 6 01:34:57.263874 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 6 01:34:57.264139 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 6 01:34:57.264286 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 6 01:34:57.264301 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 6 01:34:57.264309 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 6 01:34:57.264316 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 6 01:34:57.264323 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 6 01:34:57.264330 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 6 01:34:57.264337 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 6 01:34:57.264344 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 6 01:34:57.264350 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 6 01:34:57.264357 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 6 01:34:57.264367 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 6 01:34:57.264374 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 6 01:34:57.264381 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 6 01:34:57.264388 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 6 01:34:57.264395 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 6 01:34:57.264401 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 6 01:34:57.264408 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 6 01:34:57.264415 kernel: iommu: Default domain type: Translated
Mar 6 01:34:57.264422 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 6 01:34:57.264432 kernel: PCI: Using ACPI for IRQ routing
Mar 6 01:34:57.264439 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 6 01:34:57.264446 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 6 01:34:57.264452 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 6 01:34:57.264597 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 6 01:34:57.264848 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 6 01:34:57.265205 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 6 01:34:57.265217 kernel: vgaarb: loaded
Mar 6 01:34:57.265230 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 6 01:34:57.265237 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 6 01:34:57.265244 kernel: clocksource: Switched to clocksource kvm-clock
Mar 6 01:34:57.265251 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 01:34:57.265258 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 01:34:57.265265 kernel: pnp: PnP ACPI init
Mar 6 01:34:57.265422 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 6 01:34:57.265432 kernel: pnp: PnP ACPI: found 6 devices
Mar 6 01:34:57.265444 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 6 01:34:57.265451 kernel: NET: Registered PF_INET protocol family
Mar 6 01:34:57.265458 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 01:34:57.265465 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 6 01:34:57.265472 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 01:34:57.265479 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 01:34:57.265486 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 6 01:34:57.265492 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 6 01:34:57.265499 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 01:34:57.265509 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 01:34:57.265516 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 01:34:57.265523 kernel: NET: Registered PF_XDP protocol family
Mar 6 01:34:57.265659 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 6 01:34:57.266158 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 6 01:34:57.266545 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 6 01:34:57.266686 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 6 01:34:57.266991 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 6 01:34:57.267137 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 6 01:34:57.267147 kernel: PCI: CLS 0 bytes, default 64
Mar 6 01:34:57.267154 kernel: Initialise system trusted keyrings
Mar 6 01:34:57.267161 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 6 01:34:57.267168 kernel: Key type asymmetric registered
Mar 6 01:34:57.267175 kernel: Asymmetric key parser 'x509' registered
Mar 6 01:34:57.267182 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 6 01:34:57.267189 kernel: io scheduler mq-deadline registered
Mar 6 01:34:57.267196 kernel: io scheduler kyber registered
Mar 6 01:34:57.267207 kernel: io scheduler bfq registered
Mar 6 01:34:57.267213 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 6 01:34:57.267221 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 6 01:34:57.267228 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 6 01:34:57.267235 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 6 01:34:57.267242 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 01:34:57.267249 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 6 01:34:57.267256 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 6 01:34:57.267263 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 6 01:34:57.267270 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 6 01:34:57.267422 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 6 01:34:57.267433 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 6 01:34:57.267571 kernel: rtc_cmos 00:04: registered as rtc0
Mar 6 01:34:57.267759 kernel: rtc_cmos 00:04: setting system clock to 2026-03-06T01:34:56 UTC (1772760896)
Mar 6 01:34:57.267980 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 6 01:34:57.267993 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 6 01:34:57.268000 kernel: NET: Registered PF_INET6 protocol family
Mar 6 01:34:57.268012 kernel: Segment Routing with IPv6
Mar 6 01:34:57.268019 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 01:34:57.268026 kernel: NET: Registered PF_PACKET protocol family
Mar 6 01:34:57.268033 kernel: Key type dns_resolver registered
Mar 6 01:34:57.268040 kernel: IPI shorthand broadcast: enabled
Mar 6 01:34:57.268047 kernel: sched_clock: Marking stable (2017020643, 486659368)->(3484542538, -980862527)
Mar 6 01:34:57.268054 kernel: registered taskstats version 1
Mar 6 01:34:57.268061 kernel: Loading compiled-in X.509 certificates
Mar 6 01:34:57.268068 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6d88f6264570591a57b3c9c1e1c99fca6c68b8ca'
Mar 6 01:34:57.268078 kernel: Key type .fscrypt registered
Mar 6 01:34:57.268085 kernel: Key type fscrypt-provisioning registered
Mar 6 01:34:57.268092 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 01:34:57.268099 kernel: ima: Allocated hash algorithm: sha1
Mar 6 01:34:57.268106 kernel: ima: No architecture policies found
Mar 6 01:34:57.268113 kernel: clk: Disabling unused clocks
Mar 6 01:34:57.268120 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 6 01:34:57.268127 kernel: Write protecting the kernel read-only data: 36864k
Mar 6 01:34:57.268134 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 6 01:34:57.268144 kernel: Run /init as init process
Mar 6 01:34:57.268151 kernel: with arguments:
Mar 6 01:34:57.268158 kernel: /init
Mar 6 01:34:57.268165 kernel: with environment:
Mar 6 01:34:57.268171 kernel: HOME=/
Mar 6 01:34:57.268178 kernel: TERM=linux
Mar 6 01:34:57.268187 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 6 01:34:57.268196 systemd[1]: Detected virtualization kvm.
Mar 6 01:34:57.268207 systemd[1]: Detected architecture x86-64.
Mar 6 01:34:57.268214 systemd[1]: Running in initrd.
Mar 6 01:34:57.268221 systemd[1]: No hostname configured, using default hostname.
Mar 6 01:34:57.268228 systemd[1]: Hostname set to .
Mar 6 01:34:57.268236 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 01:34:57.268243 systemd[1]: Queued start job for default target initrd.target.
Mar 6 01:34:57.268250 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 01:34:57.268258 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 01:34:57.268268 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 01:34:57.268276 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 01:34:57.268283 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 01:34:57.268291 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 01:34:57.268300 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 01:34:57.268307 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 01:34:57.268315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:34:57.268325 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:34:57.268332 systemd[1]: Reached target paths.target - Path Units.
Mar 6 01:34:57.268340 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 01:34:57.268347 systemd[1]: Reached target swap.target - Swaps.
Mar 6 01:34:57.268369 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 01:34:57.268380 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 01:34:57.268390 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 01:34:57.268398 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 01:34:57.268406 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 6 01:34:57.268413 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:34:57.268421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:34:57.268428 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:34:57.268436 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 01:34:57.268443 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 01:34:57.268451 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 01:34:57.268461 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 01:34:57.268469 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 01:34:57.268476 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 01:34:57.268483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 01:34:57.268491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:34:57.268498 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 01:34:57.268528 systemd-journald[195]: Collecting audit messages is disabled.
Mar 6 01:34:57.268549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:34:57.268557 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 01:34:57.268569 systemd-journald[195]: Journal started
Mar 6 01:34:57.268585 systemd-journald[195]: Runtime Journal (/run/log/journal/43f2b88113794902abf625e2ca1e674d) is 6.0M, max 48.4M, 42.3M free.
Mar 6 01:34:57.266548 systemd-modules-load[196]: Inserted module 'overlay'
Mar 6 01:34:57.273296 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 01:34:57.288136 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 01:34:57.451140 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 01:34:57.451178 kernel: Bridge firewalling registered
Mar 6 01:34:57.298181 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 01:34:57.312263 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 6 01:34:57.457557 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:34:57.473456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:34:57.479466 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:34:57.498960 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:34:57.508028 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 01:34:57.520844 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 01:34:57.534312 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:34:57.600457 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:34:57.607075 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:34:57.633364 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 01:34:57.638856 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 01:34:57.650170 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:34:57.702053 dracut-cmdline[229]: dracut-dracut-053
Mar 6 01:34:57.716238 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:34:57.778492 systemd-resolved[230]: Positive Trust Anchors:
Mar 6 01:34:57.778527 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 01:34:57.778593 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 01:34:57.804215 systemd-resolved[230]: Defaulting to hostname 'linux'.
Mar 6 01:34:57.808632 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 01:34:57.809843 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:34:57.856971 kernel: SCSI subsystem initialized
Mar 6 01:34:57.883668 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 01:34:57.894949 kernel: iscsi: registered transport (tcp)
Mar 6 01:34:57.922563 kernel: iscsi: registered transport (qla4xxx)
Mar 6 01:34:57.922670 kernel: QLogic iSCSI HBA Driver
Mar 6 01:34:58.002792 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 01:34:58.019150 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 01:34:58.055233 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 01:34:58.055319 kernel: device-mapper: uevent: version 1.0.3
Mar 6 01:34:58.055351 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 6 01:34:58.118995 kernel: raid6: avx2x4 gen() 31132 MB/s
Mar 6 01:34:58.136989 kernel: raid6: avx2x2 gen() 28957 MB/s
Mar 6 01:34:58.156046 kernel: raid6: avx2x1 gen() 22910 MB/s
Mar 6 01:34:58.156118 kernel: raid6: using algorithm avx2x4 gen() 31132 MB/s
Mar 6 01:34:58.178125 kernel: raid6: .... xor() 4094 MB/s, rmw enabled
Mar 6 01:34:58.178244 kernel: raid6: using avx2x2 recovery algorithm
Mar 6 01:34:58.200003 kernel: xor: automatically using best checksumming function avx
Mar 6 01:34:58.384946 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 01:34:58.403553 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 01:34:58.425165 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:34:58.440502 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Mar 6 01:34:58.446947 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:34:58.466307 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 01:34:58.486617 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Mar 6 01:34:58.531952 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 01:34:58.552148 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 01:34:58.647392 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:34:58.662209 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 01:34:58.700335 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 01:34:58.711086 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 01:34:58.719437 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:34:58.728042 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 01:34:58.739960 kernel: cryptd: max_cpu_qlen set to 1000
Mar 6 01:34:58.747434 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 6 01:34:58.748168 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 01:34:58.764139 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 01:34:58.764307 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:34:58.785344 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 6 01:34:58.778195 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:34:58.787504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 01:34:58.810025 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 6 01:34:58.810069 kernel: GPT:9289727 != 19775487
Mar 6 01:34:58.810081 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 6 01:34:58.810091 kernel: GPT:9289727 != 19775487
Mar 6 01:34:58.810101 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 6 01:34:58.810111 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:34:58.810120 kernel: libata version 3.00 loaded.
Mar 6 01:34:58.787641 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:34:58.789939 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:34:58.827163 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 6 01:34:58.827201 kernel: ahci 0000:00:1f.2: version 3.0
Mar 6 01:34:58.831195 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 6 01:34:58.831232 kernel: AES CTR mode by8 optimization enabled
Mar 6 01:34:58.835010 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 6 01:34:58.835298 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 6 01:34:58.837493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:34:58.866949 kernel: scsi host0: ahci
Mar 6 01:34:58.867247 kernel: scsi host1: ahci
Mar 6 01:34:58.867427 kernel: scsi host2: ahci
Mar 6 01:34:58.867604 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463)
Mar 6 01:34:58.867617 kernel: scsi host3: ahci
Mar 6 01:34:58.867853 kernel: BTRFS: device fsid eccec0b1-0068-4620-ab61-f332f16460fa devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (460)
Mar 6 01:34:58.867878 kernel: scsi host4: ahci
Mar 6 01:34:58.868183 kernel: scsi host5: ahci
Mar 6 01:34:58.861392 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 01:34:58.871048 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 6 01:34:58.871065 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 6 01:34:58.871075 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 6 01:34:58.871091 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 6 01:34:58.871101 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 6 01:34:58.871111 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 6 01:34:58.892553 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 6 01:34:58.897955 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 6 01:34:59.029266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:34:59.037790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 6 01:34:59.044680 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 6 01:34:59.051354 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 6 01:34:59.069174 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 01:34:59.073484 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:34:59.086676 disk-uuid[555]: Primary Header is updated.
Mar 6 01:34:59.086676 disk-uuid[555]: Secondary Entries is updated.
Mar 6 01:34:59.086676 disk-uuid[555]: Secondary Header is updated.
Mar 6 01:34:59.097041 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:34:59.097073 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:34:59.106325 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:34:59.179961 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 6 01:34:59.182943 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 6 01:34:59.182971 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 6 01:34:59.185932 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 6 01:34:59.188958 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 6 01:34:59.193163 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 6 01:34:59.193187 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 6 01:34:59.196021 kernel: ata3.00: applying bridge limits
Mar 6 01:34:59.197597 kernel: ata3.00: configured for UDMA/100
Mar 6 01:34:59.200964 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 6 01:34:59.379529 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 6 01:34:59.382493 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 6 01:34:59.398944 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 6 01:35:00.109036 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:35:00.109224 disk-uuid[556]: The operation has completed successfully.
Mar 6 01:35:00.148974 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 01:35:00.149160 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 01:35:00.171149 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 01:35:00.177607 sh[592]: Success
Mar 6 01:35:00.195022 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 6 01:35:00.248441 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 01:35:00.271348 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 01:35:00.275269 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 01:35:00.297766 kernel: BTRFS info (device dm-0): first mount of filesystem eccec0b1-0068-4620-ab61-f332f16460fa
Mar 6 01:35:00.297833 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:35:00.297850 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 6 01:35:00.301352 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 6 01:35:00.306184 kernel: BTRFS info (device dm-0): using free space tree
Mar 6 01:35:00.316313 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 01:35:00.318511 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 01:35:00.333185 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 01:35:00.336798 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 01:35:00.354751 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:35:00.354804 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:35:00.354821 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:35:00.364625 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:35:00.382384 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 6 01:35:00.389306 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:35:00.397254 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 01:35:00.409143 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 01:35:00.481327 ignition[696]: Ignition 2.19.0
Mar 6 01:35:00.481353 ignition[696]: Stage: fetch-offline
Mar 6 01:35:00.481394 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:35:00.481405 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:35:00.481529 ignition[696]: parsed url from cmdline: ""
Mar 6 01:35:00.481537 ignition[696]: no config URL provided
Mar 6 01:35:00.481546 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 01:35:00.481558 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Mar 6 01:35:00.481589 ignition[696]: op(1): [started] loading QEMU firmware config module
Mar 6 01:35:00.481599 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 6 01:35:00.490309 ignition[696]: op(1): [finished] loading QEMU firmware config module
Mar 6 01:35:00.507564 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 01:35:00.525109 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 01:35:00.552565 systemd-networkd[781]: lo: Link UP
Mar 6 01:35:00.552589 systemd-networkd[781]: lo: Gained carrier
Mar 6 01:35:00.554755 systemd-networkd[781]: Enumeration completed
Mar 6 01:35:00.554991 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 01:35:00.556393 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:35:00.556398 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 01:35:00.557668 systemd-networkd[781]: eth0: Link UP
Mar 6 01:35:00.557673 systemd-networkd[781]: eth0: Gained carrier
Mar 6 01:35:00.557680 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:35:00.574034 systemd[1]: Reached target network.target - Network.
Mar 6 01:35:00.624976 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 01:35:00.680062 ignition[696]: parsing config with SHA512: 2639e2582ceab6ac559b2482c5e199c16bbbca341dd261d6c5ab741674e0e4b1fd9a60cb161721d52ce3920666d5dae74c6fab13fa821874519843f302cb88c5
Mar 6 01:35:00.684092 unknown[696]: fetched base config from "system"
Mar 6 01:35:00.684119 unknown[696]: fetched user config from "qemu"
Mar 6 01:35:00.684419 ignition[696]: fetch-offline: fetch-offline passed
Mar 6 01:35:00.684482 ignition[696]: Ignition finished successfully
Mar 6 01:35:00.694104 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 01:35:00.699988 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 6 01:35:00.713216 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 01:35:00.733636 ignition[785]: Ignition 2.19.0
Mar 6 01:35:00.733661 ignition[785]: Stage: kargs
Mar 6 01:35:00.733854 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:35:00.733868 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:35:00.734858 ignition[785]: kargs: kargs passed
Mar 6 01:35:00.734971 ignition[785]: Ignition finished successfully
Mar 6 01:35:00.747290 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 01:35:00.762166 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 01:35:00.779849 ignition[795]: Ignition 2.19.0
Mar 6 01:35:00.779859 ignition[795]: Stage: disks
Mar 6 01:35:00.780086 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:35:00.780099 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:35:00.780782 ignition[795]: disks: disks passed
Mar 6 01:35:00.780832 ignition[795]: Ignition finished successfully
Mar 6 01:35:00.793381 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 01:35:00.794634 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 01:35:00.799553 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 01:35:00.804447 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 01:35:00.810743 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 01:35:00.815413 systemd[1]: Reached target basic.target - Basic System.
Mar 6 01:35:00.831105 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 01:35:00.846770 systemd-resolved[230]: Detected conflict on linux IN A 10.0.0.76
Mar 6 01:35:00.846825 systemd-resolved[230]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Mar 6 01:35:00.851726 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 6 01:35:00.859968 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 01:35:00.872120 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 01:35:00.967959 kernel: EXT4-fs (vda9): mounted filesystem 6fb83788-0471-4e89-b45f-3a7586a627a9 r/w with ordered data mode. Quota mode: none.
Mar 6 01:35:00.968624 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 01:35:00.971468 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 01:35:00.985042 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:35:00.988294 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 01:35:01.000810 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812)
Mar 6 01:35:00.996023 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 6 01:35:01.016617 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:35:01.016653 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:35:01.016665 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:35:01.016676 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:35:00.996079 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 01:35:00.996108 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 01:35:01.002124 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 01:35:01.018187 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:35:01.037120 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 01:35:01.078723 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 01:35:01.083844 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Mar 6 01:35:01.089227 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 01:35:01.093633 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 01:35:01.208669 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 01:35:01.225072 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 01:35:01.230418 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 01:35:01.238004 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:35:01.264357 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 01:35:01.273038 ignition[925]: INFO : Ignition 2.19.0
Mar 6 01:35:01.273038 ignition[925]: INFO : Stage: mount
Mar 6 01:35:01.276654 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:35:01.276654 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:35:01.276654 ignition[925]: INFO : mount: mount passed
Mar 6 01:35:01.276654 ignition[925]: INFO : Ignition finished successfully
Mar 6 01:35:01.286274 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 01:35:01.293133 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 01:35:01.310109 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 01:35:01.318038 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:35:01.335976 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Mar 6 01:35:01.336026 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:35:01.336038 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:35:01.339735 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:35:01.345949 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:35:01.348213 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:35:01.383563 ignition[955]: INFO : Ignition 2.19.0
Mar 6 01:35:01.383563 ignition[955]: INFO : Stage: files
Mar 6 01:35:01.388221 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:35:01.388221 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:35:01.388221 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 01:35:01.388221 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 01:35:01.388221 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 01:35:01.404366 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 01:35:01.404366 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 01:35:01.404366 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 01:35:01.404366 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:35:01.404366 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 6 01:35:01.389923 unknown[955]: wrote ssh authorized keys file for user: core
Mar 6 01:35:01.464616 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 6 01:35:01.563082 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:35:01.563082 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:35:01.574958 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 6 01:35:01.747178 systemd-networkd[781]: eth0: Gained IPv6LL Mar 6 01:35:01.896089 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 6 01:35:02.292932 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:35:02.292932 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 6 01:35:02.306413 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 01:35:02.306413 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 01:35:02.306413 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 6 01:35:02.306413 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 6 01:35:02.306413 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 6 01:35:02.306413 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 6 01:35:02.306413 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 6 01:35:02.306413 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 6 01:35:02.352343 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 6 01:35:02.352343 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 6 01:35:02.352343 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 6 01:35:02.352343 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 6 01:35:02.352343 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 6 01:35:02.352343 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 6 01:35:02.352343 ignition[955]: INFO : files: 
createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 6 01:35:02.352343 ignition[955]: INFO : files: files passed Mar 6 01:35:02.352343 ignition[955]: INFO : Ignition finished successfully Mar 6 01:35:02.333578 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 6 01:35:02.357084 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 6 01:35:02.366540 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 6 01:35:02.372828 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 6 01:35:02.421428 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Mar 6 01:35:02.373036 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 6 01:35:02.428121 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:35:02.428121 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:35:02.386509 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 01:35:02.450565 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:35:02.394347 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 6 01:35:02.402957 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 6 01:35:02.434281 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 6 01:35:02.434419 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 6 01:35:02.439147 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 6 01:35:02.441947 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 6 01:35:02.444624 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 6 01:35:02.445530 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 6 01:35:02.465686 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 6 01:35:02.488263 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 6 01:35:02.502666 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 6 01:35:02.506006 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 01:35:02.512030 systemd[1]: Stopped target timers.target - Timer Units. Mar 6 01:35:02.517185 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 6 01:35:02.517401 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 6 01:35:02.523359 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 6 01:35:02.528216 systemd[1]: Stopped target basic.target - Basic System. Mar 6 01:35:02.533480 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 6 01:35:02.538879 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 6 01:35:02.544135 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 6 01:35:02.550012 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 6 01:35:02.555850 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
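Note: the Ignition "files" stage above writes the Helm tarball, several manifests under /home/core, /etc/flatcar/update.conf, and the kubernetes sysext image plus its /etc/extensions symlink, then installs prepare-helm.service (preset enabled) and coreos-metadata.service (preset disabled). A hedged sketch of the kind of Butane source that could drive a subset of these writes follows; the actual provisioning config is not part of this log, and the variant/version and unit body shown here are assumptions:

    # Hypothetical Butane input; only the Helm pieces are sketched.
    cat > provision.bu <<'EOF'
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.17.3-linux-amd64.tar.gz linux-amd64/helm
            [Install]
            WantedBy=multi-user.target
    EOF
    # Compile to the Ignition JSON that the boot above actually consumed
    butane --pretty --strict provision.bu > provision.ign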
Mar 6 01:35:02.561798 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 6 01:35:02.569190 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 6 01:35:02.575057 systemd[1]: Stopped target swap.target - Swaps. Mar 6 01:35:02.579860 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 6 01:35:02.580070 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 6 01:35:02.585732 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 6 01:35:02.589966 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 01:35:02.595445 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 6 01:35:02.595642 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:35:02.601381 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 6 01:35:02.601520 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 6 01:35:02.607500 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 6 01:35:02.607642 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 6 01:35:02.613679 systemd[1]: Stopped target paths.target - Path Units. Mar 6 01:35:02.621555 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 6 01:35:02.621980 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:35:02.632178 systemd[1]: Stopped target slices.target - Slice Units. Mar 6 01:35:02.639999 systemd[1]: Stopped target sockets.target - Socket Units. Mar 6 01:35:02.648408 systemd[1]: iscsid.socket: Deactivated successfully. Mar 6 01:35:02.648589 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 6 01:35:02.656339 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 6 01:35:02.656536 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 6 01:35:02.666780 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 6 01:35:02.667174 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 01:35:02.674638 systemd[1]: ignition-files.service: Deactivated successfully. Mar 6 01:35:02.674989 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 6 01:35:02.724226 ignition[1009]: INFO : Ignition 2.19.0 Mar 6 01:35:02.724226 ignition[1009]: INFO : Stage: umount Mar 6 01:35:02.724226 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 01:35:02.724226 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 01:35:02.724226 ignition[1009]: INFO : umount: umount passed Mar 6 01:35:02.724226 ignition[1009]: INFO : Ignition finished successfully Mar 6 01:35:02.697198 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 6 01:35:02.701647 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 6 01:35:02.701958 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 01:35:02.709167 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 6 01:35:02.713119 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 6 01:35:02.713262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 01:35:02.716568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Mar 6 01:35:02.716762 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 6 01:35:02.724058 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 6 01:35:02.724199 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 6 01:35:02.729116 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 6 01:35:02.729265 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 6 01:35:02.734465 systemd[1]: Stopped target network.target - Network. Mar 6 01:35:02.739245 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 6 01:35:02.739334 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 6 01:35:02.745131 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 6 01:35:02.745206 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 6 01:35:02.751317 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 6 01:35:02.751379 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 6 01:35:02.756093 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 6 01:35:02.756157 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 6 01:35:02.761646 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 6 01:35:02.768785 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 6 01:35:02.773018 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 6 01:35:02.775131 systemd-networkd[781]: eth0: DHCPv6 lease lost Mar 6 01:35:02.779825 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 6 01:35:02.780056 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 6 01:35:02.787971 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 6 01:35:02.788143 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 6 01:35:02.795812 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 6 01:35:02.795944 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 6 01:35:02.814126 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 6 01:35:02.818341 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 6 01:35:02.818442 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 6 01:35:02.825787 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 6 01:35:02.825963 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:35:02.831484 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 6 01:35:02.831566 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 6 01:35:02.837141 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 6 01:35:02.837219 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:35:02.843336 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:35:02.849493 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 6 01:35:02.849684 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 6 01:35:02.867476 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 6 01:35:02.867738 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:35:02.873601 systemd[1]: network-cleanup.service: Deactivated successfully. 
Mar 6 01:35:02.873764 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 6 01:35:02.880075 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 6 01:35:02.880149 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 6 01:35:02.883792 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 6 01:35:02.883841 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 01:35:02.886622 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 6 01:35:02.886682 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 6 01:35:03.005154 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Mar 6 01:35:02.889932 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 6 01:35:02.889993 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 6 01:35:02.893962 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 6 01:35:02.894018 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 6 01:35:02.899504 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 6 01:35:02.899565 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 6 01:35:02.916136 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 6 01:35:02.921026 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 6 01:35:02.921104 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 01:35:02.927038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 6 01:35:02.927095 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 01:35:02.933561 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 6 01:35:02.933722 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 6 01:35:02.939505 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 6 01:35:02.955107 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 6 01:35:02.968009 systemd[1]: Switching root. Mar 6 01:35:03.055410 systemd-journald[195]: Journal stopped Mar 6 01:35:04.664564 kernel: SELinux: policy capability network_peer_controls=1 Mar 6 01:35:04.664843 kernel: SELinux: policy capability open_perms=1 Mar 6 01:35:04.664874 kernel: SELinux: policy capability extended_socket_class=1 Mar 6 01:35:04.664970 kernel: SELinux: policy capability always_check_network=0 Mar 6 01:35:04.664999 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 6 01:35:04.665020 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 6 01:35:04.665040 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 6 01:35:04.665059 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 6 01:35:04.665081 kernel: audit: type=1403 audit(1772760903.198:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 6 01:35:04.665107 systemd[1]: Successfully loaded SELinux policy in 56.237ms. Mar 6 01:35:04.665142 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.128ms. 
Mar 6 01:35:04.665200 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 6 01:35:04.665224 systemd[1]: Detected virtualization kvm. Mar 6 01:35:04.665269 systemd[1]: Detected architecture x86-64. Mar 6 01:35:04.665291 systemd[1]: Detected first boot. Mar 6 01:35:04.665312 systemd[1]: Initializing machine ID from VM UUID. Mar 6 01:35:04.665332 zram_generator::config[1053]: No configuration found. Mar 6 01:35:04.665355 systemd[1]: Populated /etc with preset unit settings. Mar 6 01:35:04.665376 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 6 01:35:04.665403 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 6 01:35:04.665424 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 6 01:35:04.665445 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 6 01:35:04.665466 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 6 01:35:04.665486 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 6 01:35:04.665507 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 6 01:35:04.665528 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 6 01:35:04.665549 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 6 01:35:04.665576 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 6 01:35:04.665598 systemd[1]: Created slice user.slice - User and Session Slice. Mar 6 01:35:04.665619 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:35:04.665643 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:35:04.665663 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 6 01:35:04.665683 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 6 01:35:04.665746 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 6 01:35:04.665773 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 6 01:35:04.665794 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 6 01:35:04.665821 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 01:35:04.665843 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 6 01:35:04.665864 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 6 01:35:04.665935 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 6 01:35:04.665962 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 6 01:35:04.665984 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 01:35:04.666004 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 6 01:35:04.666031 systemd[1]: Reached target slices.target - Slice Units. Mar 6 01:35:04.666057 systemd[1]: Reached target swap.target - Swaps. 
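Note: the long capability string logged above (+PAM +AUDIT +SELINUX ... default-hierarchy=unified) is systemd's compile-time feature list, and it is the same string printed by systemctl --version, which is an easy way to check what a given build supports:

    # Prints "systemd 255" plus the same feature flags logged at boot
    systemctl --version

    # After the SELinux policy load and relabel logged above, the current
    # enforcement mode can be read back (assuming the SELinux userland
    # tools are present on the image)
    getenforce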
Mar 6 01:35:04.666079 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 6 01:35:04.666098 kernel: hrtimer: interrupt took 3911635 ns Mar 6 01:35:04.666119 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 6 01:35:04.666140 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 6 01:35:04.666160 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 6 01:35:04.666181 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 01:35:04.666204 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 6 01:35:04.666224 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 6 01:35:04.666251 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 6 01:35:04.666273 systemd[1]: Mounting media.mount - External Media Directory... Mar 6 01:35:04.666294 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:35:04.666314 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 6 01:35:04.666334 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 6 01:35:04.666355 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 6 01:35:04.666377 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 6 01:35:04.666396 systemd[1]: Reached target machines.target - Containers. Mar 6 01:35:04.666417 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 6 01:35:04.666443 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:35:04.666464 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 6 01:35:04.666485 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 6 01:35:04.666507 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:35:04.666527 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 01:35:04.666547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:35:04.666568 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 6 01:35:04.666588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:35:04.666646 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 6 01:35:04.666672 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 6 01:35:04.666730 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 6 01:35:04.666757 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 6 01:35:04.666777 systemd[1]: Stopped systemd-fsck-usr.service. Mar 6 01:35:04.666796 kernel: fuse: init (API version 7.39) Mar 6 01:35:04.666824 kernel: ACPI: bus type drm_connector registered Mar 6 01:35:04.666845 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 6 01:35:04.666866 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Mar 6 01:35:04.666956 kernel: loop: module loaded Mar 6 01:35:04.666982 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 6 01:35:04.667003 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 6 01:35:04.667024 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 6 01:35:04.667044 systemd[1]: verity-setup.service: Deactivated successfully. Mar 6 01:35:04.667065 systemd[1]: Stopped verity-setup.service. Mar 6 01:35:04.667086 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:35:04.667231 systemd-journald[1137]: Collecting audit messages is disabled. Mar 6 01:35:04.667283 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 6 01:35:04.667303 systemd-journald[1137]: Journal started Mar 6 01:35:04.667330 systemd-journald[1137]: Runtime Journal (/run/log/journal/43f2b88113794902abf625e2ca1e674d) is 6.0M, max 48.4M, 42.3M free. Mar 6 01:35:03.890785 systemd[1]: Queued start job for default target multi-user.target. Mar 6 01:35:03.909539 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 6 01:35:03.910367 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 6 01:35:03.910957 systemd[1]: systemd-journald.service: Consumed 1.559s CPU time. Mar 6 01:35:04.697015 systemd[1]: Started systemd-journald.service - Journal Service. Mar 6 01:35:04.712410 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 6 01:35:04.722866 systemd[1]: Mounted media.mount - External Media Directory. Mar 6 01:35:04.734577 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 6 01:35:04.745778 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 6 01:35:04.748877 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 6 01:35:04.761142 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 6 01:35:04.765154 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 01:35:04.768847 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 6 01:35:04.769327 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 6 01:35:04.773147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:35:04.773431 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:35:04.777180 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 01:35:04.777473 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 01:35:04.780729 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:35:04.781057 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:35:04.784645 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 6 01:35:04.785020 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 6 01:35:04.788337 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:35:04.788619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:35:04.792066 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 6 01:35:04.795289 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Mar 6 01:35:04.798739 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 6 01:35:04.817001 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 6 01:35:04.835112 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 6 01:35:04.841104 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 6 01:35:04.844614 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 6 01:35:04.845405 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 6 01:35:04.851217 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 6 01:35:04.864340 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 6 01:35:04.870054 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 6 01:35:04.874130 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:35:04.877394 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 6 01:35:04.884054 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 6 01:35:04.888571 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 01:35:04.894175 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 6 01:35:04.897953 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 01:35:04.900248 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 6 01:35:04.907212 systemd-journald[1137]: Time spent on flushing to /var/log/journal/43f2b88113794902abf625e2ca1e674d is 108.337ms for 942 entries. Mar 6 01:35:04.907212 systemd-journald[1137]: System Journal (/var/log/journal/43f2b88113794902abf625e2ca1e674d) is 8.0M, max 195.6M, 187.6M free. Mar 6 01:35:05.058197 systemd-journald[1137]: Received client request to flush runtime journal. Mar 6 01:35:04.913171 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 6 01:35:04.933341 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 6 01:35:04.942566 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 6 01:35:04.949510 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 6 01:35:04.959289 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 6 01:35:05.003508 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 6 01:35:05.026063 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 6 01:35:05.042009 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 6 01:35:05.062400 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:35:05.068324 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 6 01:35:05.076935 kernel: loop0: detected capacity change from 0 to 142488 Mar 6 01:35:05.091546 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
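Note: systemd-journald starts with a volatile runtime journal in /run and, once systemd-journal-flush.service runs, hands entries over to the persistent system journal in /var/log/journal; the sizes of both are logged above. The handoff and the resulting disk usage can be inspected directly:

    # Space consumed by the persistent journal after the flush
    journalctl --disk-usage

    # Trigger the runtime -> persistent handoff by hand (what the
    # flush service above does at boot)
    journalctl --flush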
Mar 6 01:35:05.106240 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 6 01:35:05.115321 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 6 01:35:05.116523 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 6 01:35:05.122447 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 6 01:35:05.125940 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 01:35:05.144156 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 6 01:35:05.157416 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 6 01:35:05.163015 kernel: loop1: detected capacity change from 0 to 228704 Mar 6 01:35:05.173738 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Mar 6 01:35:05.173769 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Mar 6 01:35:05.181662 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 01:35:05.224969 kernel: loop2: detected capacity change from 0 to 140768 Mar 6 01:35:05.278967 kernel: loop3: detected capacity change from 0 to 142488 Mar 6 01:35:05.357027 kernel: loop4: detected capacity change from 0 to 228704 Mar 6 01:35:05.375931 kernel: loop5: detected capacity change from 0 to 140768 Mar 6 01:35:05.416522 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 6 01:35:05.417409 (sd-merge)[1191]: Merged extensions into '/usr'. Mar 6 01:35:05.425524 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Mar 6 01:35:05.425585 systemd[1]: Reloading... Mar 6 01:35:05.944949 zram_generator::config[1215]: No configuration found. Mar 6 01:35:06.342157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:35:06.538506 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 6 01:35:06.592043 systemd[1]: Reloading finished in 1165 ms. Mar 6 01:35:06.636877 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 6 01:35:06.640501 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 6 01:35:06.668380 systemd[1]: Starting ensure-sysext.service... Mar 6 01:35:06.675156 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 6 01:35:06.693247 systemd[1]: Reloading requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... Mar 6 01:35:06.693295 systemd[1]: Reloading... Mar 6 01:35:06.746120 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 6 01:35:06.746548 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 6 01:35:06.747790 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 6 01:35:06.749545 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Mar 6 01:35:06.749635 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. 
Mar 6 01:35:06.758295 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 01:35:06.758327 systemd-tmpfiles[1255]: Skipping /boot Mar 6 01:35:06.797728 zram_generator::config[1281]: No configuration found. Mar 6 01:35:06.957549 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 01:35:06.957646 systemd-tmpfiles[1255]: Skipping /boot Mar 6 01:35:07.195473 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:35:07.247088 systemd[1]: Reloading finished in 553 ms. Mar 6 01:35:07.275234 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 6 01:35:07.279096 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:35:07.304557 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:35:07.310541 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 6 01:35:07.315548 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 6 01:35:07.321082 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 01:35:07.328128 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:35:07.344193 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 6 01:35:07.359126 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 6 01:35:07.365204 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:35:07.365399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:35:07.368396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:35:07.388272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:35:07.396180 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:35:07.399639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:35:07.399794 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:35:07.407365 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 6 01:35:07.412618 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:35:07.413262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:35:07.419300 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:35:07.420274 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:35:07.420512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:35:07.426989 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Mar 6 01:35:07.431217 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
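Note: a little earlier, (sd-merge) reported merging the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images into /usr, after which systemd reloads so units shipped inside the sysexts become visible; the reload just above belongs to ensure-sysext.service. The merged state can be examined, and re-merged without a reboot, with the systemd-sysext tool:

    # Show which extension images are merged and where they came from
    systemd-sysext status

    # After dropping a new *.raw into /etc/extensions or /var/lib/extensions,
    # fold it in without rebooting
    systemd-sysext refresh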
Mar 6 01:35:07.436553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:35:07.438472 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 6 01:35:07.443308 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:35:07.443547 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:35:07.448058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:35:07.449618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:35:07.462222 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:35:07.462526 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:35:07.477312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:35:07.479989 augenrules[1351]: No rules Mar 6 01:35:07.520437 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 01:35:07.531851 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:35:07.543256 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:35:07.547214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:35:07.547419 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:35:07.550635 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 6 01:35:07.555238 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:35:07.560816 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 6 01:35:07.570679 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 6 01:35:07.577078 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 6 01:35:07.582242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:35:07.582472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:35:07.587385 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 01:35:07.587614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 01:35:07.595614 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:35:07.595937 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:35:07.603353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:35:07.603605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:35:07.613665 systemd[1]: Finished ensure-sysext.service. Mar 6 01:35:07.656224 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 01:35:07.662447 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 01:35:07.662603 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Mar 6 01:35:08.060328 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 6 01:35:08.073161 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 6 01:35:08.077936 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 6 01:35:08.087471 systemd-resolved[1324]: Positive Trust Anchors: Mar 6 01:35:08.115218 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1380) Mar 6 01:35:08.087533 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 01:35:08.087561 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 01:35:08.101737 systemd-resolved[1324]: Defaulting to hostname 'linux'. Mar 6 01:35:08.448927 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 01:35:08.469401 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 01:35:08.501999 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 6 01:35:08.512987 kernel: ACPI: button: Power Button [PWRF] Mar 6 01:35:08.524858 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 6 01:35:08.537831 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 6 01:35:08.546818 systemd-networkd[1392]: lo: Link UP Mar 6 01:35:08.547349 systemd-networkd[1392]: lo: Gained carrier Mar 6 01:35:08.557105 systemd-networkd[1392]: Enumeration completed Mar 6 01:35:08.561125 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 01:35:08.567082 systemd[1]: Reached target network.target - Network. Mar 6 01:35:08.588995 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:35:08.589023 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 01:35:08.590508 systemd-networkd[1392]: eth0: Link UP Mar 6 01:35:08.590521 systemd-networkd[1392]: eth0: Gained carrier Mar 6 01:35:08.590550 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:35:08.644489 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 6 01:35:08.748601 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 6 01:35:08.783369 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 6 01:35:08.752412 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Mar 6 01:35:08.765081 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 6 01:35:08.783641 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 6 01:35:08.796212 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 6 01:35:08.799805 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 6 01:35:08.800272 systemd-timesyncd[1394]: Initial clock synchronization to Fri 2026-03-06 01:35:08.677250 UTC. Mar 6 01:35:08.804367 systemd[1]: Reached target time-set.target - System Time Set. Mar 6 01:35:08.845992 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 6 01:35:08.857363 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 01:35:08.949141 kernel: mousedev: PS/2 mouse device common for all mice Mar 6 01:35:08.975954 kernel: kvm_amd: TSC scaling supported Mar 6 01:35:08.976064 kernel: kvm_amd: Nested Virtualization enabled Mar 6 01:35:08.976081 kernel: kvm_amd: Nested Paging enabled Mar 6 01:35:08.978616 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 6 01:35:08.978657 kernel: kvm_amd: PMU virtualization is disabled Mar 6 01:35:09.039978 kernel: EDAC MC: Ver: 3.0.0 Mar 6 01:35:09.073489 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 6 01:35:09.188706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 01:35:09.206685 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 6 01:35:09.232015 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 6 01:35:09.280994 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 6 01:35:09.287055 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 6 01:35:09.291624 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 01:35:09.296593 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 6 01:35:09.301414 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 6 01:35:09.307065 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 6 01:35:09.311625 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 6 01:35:09.316434 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 6 01:35:09.320770 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 6 01:35:09.320857 systemd[1]: Reached target paths.target - Path Units. Mar 6 01:35:09.324285 systemd[1]: Reached target timers.target - Timer Units. Mar 6 01:35:09.329133 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 6 01:35:09.335693 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 6 01:35:09.352361 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 6 01:35:09.357596 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 6 01:35:09.362696 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 6 01:35:09.366384 systemd[1]: Reached target sockets.target - Socket Units. 
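Note: the entries above complete early networking: systemd-networkd brings eth0 up with the generic zz-default.network, DHCPv4 assigns 10.0.0.76/16 via 10.0.0.1, and systemd-timesyncd synchronizes against the NTP server offered in the lease. The same state can be read back at runtime (interface name and addresses are from this log):

    # Link, lease, and DNS state for the interface configured above
    networkctl status eth0
    resolvectl status

    # NTP peer and sync details for the server from the DHCP lease
    timedatectl timesync-status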
Mar 6 01:35:09.369317 systemd[1]: Reached target basic.target - Basic System. Mar 6 01:35:09.371824 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 6 01:35:09.373269 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 6 01:35:09.373337 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 6 01:35:09.375381 systemd[1]: Starting containerd.service - containerd container runtime... Mar 6 01:35:09.380097 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 6 01:35:09.385849 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 6 01:35:09.393751 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 6 01:35:09.396950 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 6 01:35:09.408364 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 6 01:35:09.414333 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 6 01:35:09.414567 jq[1427]: false Mar 6 01:35:09.419995 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 6 01:35:09.426686 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 6 01:35:09.433722 dbus-daemon[1426]: [system] SELinux support is enabled Mar 6 01:35:09.436070 extend-filesystems[1428]: Found loop3 Mar 6 01:35:09.436070 extend-filesystems[1428]: Found loop4 Mar 6 01:35:09.436070 extend-filesystems[1428]: Found loop5 Mar 6 01:35:09.436070 extend-filesystems[1428]: Found sr0 Mar 6 01:35:09.436070 extend-filesystems[1428]: Found vda Mar 6 01:35:09.436070 extend-filesystems[1428]: Found vda1 Mar 6 01:35:09.436070 extend-filesystems[1428]: Found vda2 Mar 6 01:35:09.436070 extend-filesystems[1428]: Found vda3 Mar 6 01:35:09.436070 extend-filesystems[1428]: Found usr Mar 6 01:35:09.436070 extend-filesystems[1428]: Found vda4 Mar 6 01:35:09.436070 extend-filesystems[1428]: Found vda6 Mar 6 01:35:09.436070 extend-filesystems[1428]: Found vda7 Mar 6 01:35:09.436070 extend-filesystems[1428]: Found vda9 Mar 6 01:35:09.436070 extend-filesystems[1428]: Checking size of /dev/vda9 Mar 6 01:35:09.439226 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 6 01:35:09.651719 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 6 01:35:09.666595 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1374) Mar 6 01:35:09.666735 extend-filesystems[1428]: Resized partition /dev/vda9 Mar 6 01:35:09.449144 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 6 01:35:09.682781 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) Mar 6 01:35:09.692747 update_engine[1442]: I20260306 01:35:09.582626 1442 main.cc:92] Flatcar Update Engine starting Mar 6 01:35:09.692747 update_engine[1442]: I20260306 01:35:09.588485 1442 update_check_scheduler.cc:74] Next update check in 10m26s Mar 6 01:35:09.450338 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 6 01:35:09.470530 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 6 01:35:09.694804 jq[1445]: true Mar 6 01:35:09.485989 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 6 01:35:09.495507 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 6 01:35:09.522149 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 6 01:35:09.572637 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 6 01:35:09.573129 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 6 01:35:09.588378 systemd[1]: motdgen.service: Deactivated successfully. Mar 6 01:35:09.591418 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 6 01:35:09.632841 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 6 01:35:09.633241 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 6 01:35:09.728041 tar[1451]: linux-amd64/LICENSE Mar 6 01:35:09.732218 tar[1451]: linux-amd64/helm Mar 6 01:35:09.731925 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 6 01:35:09.738404 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 6 01:35:09.744382 jq[1457]: true Mar 6 01:35:09.756089 systemd[1]: Started update-engine.service - Update Engine. Mar 6 01:35:09.763967 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 6 01:35:09.763967 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 6 01:35:09.763967 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 6 01:35:09.780693 extend-filesystems[1428]: Resized filesystem in /dev/vda9 Mar 6 01:35:09.779640 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 6 01:35:09.783864 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 6 01:35:09.785646 systemd-logind[1436]: Watching system buttons on /dev/input/event1 (Power Button) Mar 6 01:35:09.785677 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 6 01:35:09.795022 systemd-logind[1436]: New seat seat0. Mar 6 01:35:09.804616 systemd[1]: Started systemd-logind.service - User Login Management. Mar 6 01:35:09.819924 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 6 01:35:09.821665 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 6 01:35:09.826831 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 6 01:35:09.827011 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 6 01:35:09.842723 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 6 01:35:10.087770 systemd-networkd[1392]: eth0: Gained IPv6LL Mar 6 01:35:10.124396 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 6 01:35:10.139077 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 6 01:35:10.153767 systemd[1]: Reached target network-online.target - Network is Online. 
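Note: extend-filesystems above grows the root filesystem on vda9 from 553472 to 1864699 4k blocks while it is mounted; ext4 supports this online, so a single resize2fs pass suffices (this assumes the underlying partition was already enlarged, which is not shown in this log). Done by hand it would look like:

    # Inspect current partition and filesystem sizes
    lsblk /dev/vda
    df -h /

    # Online-grow the mounted ext4 filesystem to fill its partition,
    # as the resize2fs 1.47.1 output above shows
    sudo resize2fs /dev/vda9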
Mar 6 01:35:10.303445 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 6 01:35:10.316983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:35:10.322455 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Mar 6 01:35:10.333170 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 6 01:35:10.340506 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 6 01:35:10.350133 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 6 01:35:10.375437 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 6 01:35:10.378431 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 6 01:35:10.379173 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 6 01:35:10.392551 systemd[1]: issuegen.service: Deactivated successfully. Mar 6 01:35:10.392979 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 6 01:35:10.764007 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 6 01:35:10.768416 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 6 01:35:10.778850 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 6 01:35:10.779159 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 6 01:35:10.785824 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 6 01:35:10.789245 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 6 01:35:10.802286 systemd[1]: Started sshd@0-10.0.0.76:22-10.0.0.1:33164.service - OpenSSH per-connection server daemon (10.0.0.1:33164). Mar 6 01:35:10.868606 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 6 01:35:10.883501 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 6 01:35:10.893416 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 6 01:35:10.899087 systemd[1]: Reached target getty.target - Login Prompts. Mar 6 01:35:10.964630 sshd[1520]: Accepted publickey for core from 10.0.0.1 port 33164 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:35:10.971440 sshd[1520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:35:10.984962 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 6 01:35:10.998558 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 01:35:11.009203 systemd-logind[1436]: New session 1 of user core. Mar 6 01:35:11.078100 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 01:35:11.091233 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 6 01:35:11.105130 (systemd)[1531]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 01:35:12.210704 containerd[1459]: time="2026-03-06T01:35:12.210355451Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 6 01:35:12.324155 containerd[1459]: time="2026-03-06T01:35:12.323820858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:35:12.348269 systemd[1531]: Queued start job for default target default.target. Mar 6 01:35:12.356372 systemd[1531]: Created slice app.slice - User Application Slice. 
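Note: the sshd entries above show the first login: a publickey authentication for the "core" user, after which logind creates session 1, the user-500 slice, and the per-user service manager (user@500.service). From the client side, and for inspecting the result, something like the following applies (the address and UID are from this log; the key path is a placeholder, not recovered from it):

    # Connect as the provisioned user; 10.0.0.76 is the DHCP address above
    ssh -i ~/.ssh/id_rsa core@10.0.0.76

    # On the host: the logind session and user manager created above
    loginctl list-sessions
    systemctl status user@500.service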
Mar 6 01:35:12.356585 containerd[1459]: time="2026-03-06T01:35:12.349356951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:35:12.356585 containerd[1459]: time="2026-03-06T01:35:12.349511751Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 6 01:35:12.356585 containerd[1459]: time="2026-03-06T01:35:12.349550890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 6 01:35:12.356585 containerd[1459]: time="2026-03-06T01:35:12.349851554Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 6 01:35:12.356585 containerd[1459]: time="2026-03-06T01:35:12.349943147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 6 01:35:12.356585 containerd[1459]: time="2026-03-06T01:35:12.350123977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:35:12.356585 containerd[1459]: time="2026-03-06T01:35:12.350148459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:35:12.356585 containerd[1459]: time="2026-03-06T01:35:12.350466020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:35:12.356585 containerd[1459]: time="2026-03-06T01:35:12.350508778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 6 01:35:12.356585 containerd[1459]: time="2026-03-06T01:35:12.350535750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:35:12.356585 containerd[1459]: time="2026-03-06T01:35:12.350546796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 6 01:35:12.356410 systemd[1531]: Reached target paths.target - Paths. Mar 6 01:35:12.357096 containerd[1459]: time="2026-03-06T01:35:12.350680137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:35:12.357096 containerd[1459]: time="2026-03-06T01:35:12.351288009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:35:12.357096 containerd[1459]: time="2026-03-06T01:35:12.351599353Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:35:12.357096 containerd[1459]: time="2026-03-06T01:35:12.351625274Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 6 01:35:12.357096 containerd[1459]: time="2026-03-06T01:35:12.351768422Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 6 01:35:12.357096 containerd[1459]: time="2026-03-06T01:35:12.351987893Z" level=info msg="metadata content store policy set" policy=shared Mar 6 01:35:12.356431 systemd[1531]: Reached target timers.target - Timers. Mar 6 01:35:12.359978 systemd[1531]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 01:35:12.367871 containerd[1459]: time="2026-03-06T01:35:12.367736471Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 6 01:35:12.368062 containerd[1459]: time="2026-03-06T01:35:12.367956498Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 6 01:35:12.368062 containerd[1459]: time="2026-03-06T01:35:12.367989428Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 6 01:35:12.368062 containerd[1459]: time="2026-03-06T01:35:12.368016529Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 6 01:35:12.368062 containerd[1459]: time="2026-03-06T01:35:12.368042182Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 6 01:35:12.368468 containerd[1459]: time="2026-03-06T01:35:12.368401172Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 6 01:35:12.369215 containerd[1459]: time="2026-03-06T01:35:12.369142117Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 6 01:35:12.369584 containerd[1459]: time="2026-03-06T01:35:12.369519462Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 6 01:35:12.369630 containerd[1459]: time="2026-03-06T01:35:12.369580604Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 6 01:35:12.369630 containerd[1459]: time="2026-03-06T01:35:12.369606704Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 6 01:35:12.369715 containerd[1459]: time="2026-03-06T01:35:12.369630392Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 6 01:35:12.369715 containerd[1459]: time="2026-03-06T01:35:12.369699329Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 6 01:35:12.369829 containerd[1459]: time="2026-03-06T01:35:12.369792172Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 6 01:35:12.369983 containerd[1459]: time="2026-03-06T01:35:12.369859116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 6 01:35:12.370077 containerd[1459]: time="2026-03-06T01:35:12.370021065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 6 01:35:12.375307 containerd[1459]: time="2026-03-06T01:35:12.375220943Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Mar 6 01:35:12.375394 containerd[1459]: time="2026-03-06T01:35:12.375307689Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 6 01:35:12.375394 containerd[1459]: time="2026-03-06T01:35:12.375372559Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 6 01:35:12.375589 containerd[1459]: time="2026-03-06T01:35:12.375531869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.375652 containerd[1459]: time="2026-03-06T01:35:12.375586656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.375652 containerd[1459]: time="2026-03-06T01:35:12.375613756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.375652 containerd[1459]: time="2026-03-06T01:35:12.375637377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.375808 containerd[1459]: time="2026-03-06T01:35:12.375655424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.375808 containerd[1459]: time="2026-03-06T01:35:12.375728069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.375808 containerd[1459]: time="2026-03-06T01:35:12.375756508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.375932 containerd[1459]: time="2026-03-06T01:35:12.375810213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.375932 containerd[1459]: time="2026-03-06T01:35:12.375836332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.376000 containerd[1459]: time="2026-03-06T01:35:12.375940480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.376000 containerd[1459]: time="2026-03-06T01:35:12.375973153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.376079 containerd[1459]: time="2026-03-06T01:35:12.376014106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.376079 containerd[1459]: time="2026-03-06T01:35:12.376050340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.376241 containerd[1459]: time="2026-03-06T01:35:12.376173625Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 6 01:35:12.376384 containerd[1459]: time="2026-03-06T01:35:12.376333948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.376432 containerd[1459]: time="2026-03-06T01:35:12.376385700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.376466 containerd[1459]: time="2026-03-06T01:35:12.376431293Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Mar 6 01:35:12.376799 containerd[1459]: time="2026-03-06T01:35:12.376738760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 6 01:35:12.376853 containerd[1459]: time="2026-03-06T01:35:12.376829600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 6 01:35:12.376987 containerd[1459]: time="2026-03-06T01:35:12.376854113Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 6 01:35:12.376987 containerd[1459]: time="2026-03-06T01:35:12.376871674Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 6 01:35:12.376987 containerd[1459]: time="2026-03-06T01:35:12.376966660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.377060 containerd[1459]: time="2026-03-06T01:35:12.377036389Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 6 01:35:12.377123 containerd[1459]: time="2026-03-06T01:35:12.377082102Z" level=info msg="NRI interface is disabled by configuration." Mar 6 01:35:12.377151 containerd[1459]: time="2026-03-06T01:35:12.377124493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 6 01:35:12.378095 containerd[1459]: time="2026-03-06T01:35:12.377982091Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 6 01:35:12.378095 containerd[1459]: time="2026-03-06T01:35:12.378110771Z" level=info msg="Connect containerd service" Mar 6 01:35:12.379166 containerd[1459]: time="2026-03-06T01:35:12.378298680Z" level=info msg="using legacy CRI server" Mar 6 01:35:12.379166 containerd[1459]: time="2026-03-06T01:35:12.378314288Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 6 01:35:12.379166 containerd[1459]: time="2026-03-06T01:35:12.378795493Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 6 01:35:12.380857 containerd[1459]: time="2026-03-06T01:35:12.380789001Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 01:35:12.382252 containerd[1459]: time="2026-03-06T01:35:12.382203523Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 6 01:35:12.382407 containerd[1459]: time="2026-03-06T01:35:12.382327255Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 6 01:35:12.382756 containerd[1459]: time="2026-03-06T01:35:12.382513473Z" level=info msg="Start subscribing containerd event" Mar 6 01:35:12.382815 containerd[1459]: time="2026-03-06T01:35:12.382791161Z" level=info msg="Start recovering state" Mar 6 01:35:12.385981 containerd[1459]: time="2026-03-06T01:35:12.384546329Z" level=info msg="Start event monitor" Mar 6 01:35:12.385981 containerd[1459]: time="2026-03-06T01:35:12.385474052Z" level=info msg="Start snapshots syncer" Mar 6 01:35:12.385981 containerd[1459]: time="2026-03-06T01:35:12.385517753Z" level=info msg="Start cni network conf syncer for default" Mar 6 01:35:12.385981 containerd[1459]: time="2026-03-06T01:35:12.385535622Z" level=info msg="Start streaming server" Mar 6 01:35:12.385981 containerd[1459]: time="2026-03-06T01:35:12.385809125Z" level=info msg="containerd successfully booted in 0.177967s" Mar 6 01:35:12.387186 systemd[1]: Started containerd.service - containerd container runtime. Mar 6 01:35:12.401071 systemd[1531]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 01:35:12.401267 systemd[1531]: Reached target sockets.target - Sockets. Mar 6 01:35:12.401290 systemd[1531]: Reached target basic.target - Basic System. Mar 6 01:35:12.401355 systemd[1531]: Reached target default.target - Main User Target. Mar 6 01:35:12.401410 systemd[1531]: Startup finished in 1.286s. Mar 6 01:35:12.402580 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 01:35:12.444480 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 01:35:12.527210 systemd[1]: Started sshd@1-10.0.0.76:22-10.0.0.1:42654.service - OpenSSH per-connection server daemon (10.0.0.1:42654). 
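Note: the CRI error above ("no network config found in /etc/cni/net.d") is expected this early: no CNI conflist has been installed yet, and the CRI config dump shows NetworkPluginConfDir=/etc/cni/net.d with NetworkPluginMaxConfNum=1, so only the lexically first config file will ever be loaded. A hedged sketch of what dropping a minimal conflist there looks like; the "demo-net" name, bridge settings, and 10.244.0.0/24 subnet are illustrative placeholders, not values from this log (normally a network add-on writes this file, not an operator):

```python
import json
import pathlib

# Minimal bridge + host-local + portmap conflist (standard CNI plugin types).
conflist = {
    "cniVersion": "0.3.1",
    "name": "demo-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/24",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

conf_dir = pathlib.Path("/etc/cni/net.d")
conf_dir.mkdir(parents=True, exist_ok=True)
(conf_dir / "10-demo-net.conflist").write_text(json.dumps(conflist, indent=2))
```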
Mar 6 01:35:12.621841 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 42654 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:35:12.625797 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:35:12.636806 systemd-logind[1436]: New session 2 of user core. Mar 6 01:35:12.640171 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 01:35:12.650596 tar[1451]: linux-amd64/README.md Mar 6 01:35:12.671187 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 01:35:12.720359 sshd[1547]: pam_unix(sshd:session): session closed for user core Mar 6 01:35:12.755056 systemd[1]: sshd@1-10.0.0.76:22-10.0.0.1:42654.service: Deactivated successfully. Mar 6 01:35:12.759860 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 01:35:12.762424 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit. Mar 6 01:35:12.769472 systemd[1]: Started sshd@2-10.0.0.76:22-10.0.0.1:42660.service - OpenSSH per-connection server daemon (10.0.0.1:42660). Mar 6 01:35:12.775092 systemd-logind[1436]: Removed session 2. Mar 6 01:35:12.809297 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 42660 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:35:12.812129 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:35:12.819940 systemd-logind[1436]: New session 3 of user core. Mar 6 01:35:12.827259 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 01:35:13.284224 sshd[1557]: pam_unix(sshd:session): session closed for user core Mar 6 01:35:13.290109 systemd[1]: sshd@2-10.0.0.76:22-10.0.0.1:42660.service: Deactivated successfully. Mar 6 01:35:13.293651 systemd[1]: session-3.scope: Deactivated successfully. Mar 6 01:35:13.294697 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit. Mar 6 01:35:13.296697 systemd-logind[1436]: Removed session 3. Mar 6 01:35:15.197512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:35:15.205383 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 01:35:15.210009 systemd[1]: Startup finished in 2.197s (kernel) + 6.376s (initrd) + 12.066s (userspace) = 20.641s. Mar 6 01:35:15.279965 (kubelet)[1568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:35:16.740686 kubelet[1568]: E0306 01:35:16.740031 1568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:35:16.745610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:35:16.745998 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:35:16.746622 systemd[1]: kubelet.service: Consumed 5.484s CPU time. Mar 6 01:35:23.251308 systemd[1]: Started sshd@3-10.0.0.76:22-10.0.0.1:42634.service - OpenSSH per-connection server daemon (10.0.0.1:42634). 
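Note: the kubelet failure above is the classic pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist yet, because on a kubeadm-provisioned node that file is only written by kubeadm init/join. Until then the unit will crash-loop exactly as the log shows. A small diagnostic sketch an operator might use while watching this loop (the path comes from the error message; everything else is illustrative):

```python
import pathlib
import sys

KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

def preflight() -> int:
    """Report whether the kubelet config file the unit needs exists yet."""
    if not KUBELET_CONFIG.exists():
        print(
            f"{KUBELET_CONFIG} missing: kubelet will keep exiting until "
            "kubeadm init/join (or other provisioning) writes it",
            file=sys.stderr,
        )
        return 1
    print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
    return 0

if __name__ == "__main__":
    sys.exit(preflight())
```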
Mar 6 01:35:23.301305 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 42634 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:35:23.304382 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:35:23.311446 systemd-logind[1436]: New session 4 of user core. Mar 6 01:35:23.325175 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 01:35:23.389866 sshd[1582]: pam_unix(sshd:session): session closed for user core Mar 6 01:35:23.407754 systemd[1]: sshd@3-10.0.0.76:22-10.0.0.1:42634.service: Deactivated successfully. Mar 6 01:35:23.410728 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 01:35:23.413199 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit. Mar 6 01:35:23.424543 systemd[1]: Started sshd@4-10.0.0.76:22-10.0.0.1:42642.service - OpenSSH per-connection server daemon (10.0.0.1:42642). Mar 6 01:35:23.426274 systemd-logind[1436]: Removed session 4. Mar 6 01:35:23.466725 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 42642 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:35:23.469142 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:35:23.476095 systemd-logind[1436]: New session 5 of user core. Mar 6 01:35:23.495280 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 01:35:23.552441 sshd[1589]: pam_unix(sshd:session): session closed for user core Mar 6 01:35:23.563581 systemd[1]: sshd@4-10.0.0.76:22-10.0.0.1:42642.service: Deactivated successfully. Mar 6 01:35:23.565863 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 01:35:23.567957 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit. Mar 6 01:35:23.577615 systemd[1]: Started sshd@5-10.0.0.76:22-10.0.0.1:42648.service - OpenSSH per-connection server daemon (10.0.0.1:42648). Mar 6 01:35:23.579266 systemd-logind[1436]: Removed session 5. Mar 6 01:35:23.619826 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 42648 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:35:23.622203 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:35:23.628803 systemd-logind[1436]: New session 6 of user core. Mar 6 01:35:23.638113 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 6 01:35:23.702771 sshd[1596]: pam_unix(sshd:session): session closed for user core Mar 6 01:35:23.711116 systemd[1]: sshd@5-10.0.0.76:22-10.0.0.1:42648.service: Deactivated successfully. Mar 6 01:35:23.713576 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 01:35:23.716004 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit. Mar 6 01:35:23.728442 systemd[1]: Started sshd@6-10.0.0.76:22-10.0.0.1:42662.service - OpenSSH per-connection server daemon (10.0.0.1:42662). Mar 6 01:35:23.730020 systemd-logind[1436]: Removed session 6. Mar 6 01:35:23.775312 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 42662 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:35:23.777801 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:35:23.783831 systemd-logind[1436]: New session 7 of user core. Mar 6 01:35:23.802421 systemd[1]: Started session-7.scope - Session 7 of User core. 
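Note: the journal timestamps make the very short SSH sessions above easy to quantify; session 4's pam open and close entries are about 85 ms apart. A sketch that parses the two stamps (syslog-style stamps omit the year, so it is filled in from the containerd entries later in this log, which show 2026):

```python
from datetime import datetime

FMT = "%Y %b %d %H:%M:%S.%f"
YEAR = "2026"  # journal stamps carry no year; taken from containerd's ISO timestamps

opened = datetime.strptime(f"{YEAR} Mar 6 01:35:23.304382", FMT)  # pam session opened
closed = datetime.strptime(f"{YEAR} Mar 6 01:35:23.389866", FMT)  # pam session closed
print((closed - opened).total_seconds())  # -> 0.085484
```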
Mar 6 01:35:23.879574 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 01:35:23.880264 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:35:23.899719 sudo[1606]: pam_unix(sudo:session): session closed for user root Mar 6 01:35:23.903515 sshd[1603]: pam_unix(sshd:session): session closed for user core Mar 6 01:35:23.914270 systemd[1]: sshd@6-10.0.0.76:22-10.0.0.1:42662.service: Deactivated successfully. Mar 6 01:35:23.916621 systemd[1]: session-7.scope: Deactivated successfully. Mar 6 01:35:23.919372 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. Mar 6 01:35:23.931506 systemd[1]: Started sshd@7-10.0.0.76:22-10.0.0.1:42670.service - OpenSSH per-connection server daemon (10.0.0.1:42670). Mar 6 01:35:23.933289 systemd-logind[1436]: Removed session 7. Mar 6 01:35:23.967954 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 42670 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:35:23.971083 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:35:23.977334 systemd-logind[1436]: New session 8 of user core. Mar 6 01:35:23.987093 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 6 01:35:24.051527 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 01:35:24.052165 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:35:24.061417 sudo[1615]: pam_unix(sudo:session): session closed for user root Mar 6 01:35:24.071026 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 6 01:35:24.071505 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:35:24.167544 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 6 01:35:24.172293 auditctl[1618]: No rules Mar 6 01:35:24.173774 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 01:35:24.174224 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 6 01:35:24.177304 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:35:24.226270 augenrules[1636]: No rules Mar 6 01:35:24.228214 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 6 01:35:24.229811 sudo[1614]: pam_unix(sudo:session): session closed for user root Mar 6 01:35:24.232230 sshd[1611]: pam_unix(sshd:session): session closed for user core Mar 6 01:35:24.245589 systemd[1]: sshd@7-10.0.0.76:22-10.0.0.1:42670.service: Deactivated successfully. Mar 6 01:35:24.247817 systemd[1]: session-8.scope: Deactivated successfully. Mar 6 01:35:24.249574 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit. Mar 6 01:35:24.266463 systemd[1]: Started sshd@8-10.0.0.76:22-10.0.0.1:42680.service - OpenSSH per-connection server daemon (10.0.0.1:42680). Mar 6 01:35:24.268335 systemd-logind[1436]: Removed session 8. Mar 6 01:35:24.313060 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 42680 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:35:24.315168 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:35:24.321305 systemd-logind[1436]: New session 9 of user core. Mar 6 01:35:24.335119 systemd[1]: Started session-9.scope - Session 9 of User core. 
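Note: the sudo commands above delete the shipped audit rule files and restart audit-rules, so the "No rules" reported by auditctl and augenrules afterwards is the expected outcome, not a failure. A read-only sketch that inspects the same directory augenrules compiles from (directory from the log lines above; this only counts rule lines, it loads nothing):

```python
import pathlib

RULES_D = pathlib.Path("/etc/audit/rules.d")

# Count non-empty, non-comment lines across all *.rules fragments.
rules = [
    line
    for f in sorted(RULES_D.glob("*.rules"))
    for line in f.read_text().splitlines()
    if line.strip() and not line.lstrip().startswith("#")
]
print(f"{len(rules)} active rule lines under {RULES_D}")
if not rules:
    print("augenrules/auditctl will report: No rules")
```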
Mar 6 01:35:24.397370 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 01:35:24.398126 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:35:25.125233 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 6 01:35:25.125634 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 01:35:26.045136 dockerd[1665]: time="2026-03-06T01:35:26.044593233Z" level=info msg="Starting up" Mar 6 01:35:26.398510 dockerd[1665]: time="2026-03-06T01:35:26.398095688Z" level=info msg="Loading containers: start." Mar 6 01:35:26.652084 kernel: Initializing XFRM netlink socket Mar 6 01:35:26.773624 systemd-networkd[1392]: docker0: Link UP Mar 6 01:35:26.797108 dockerd[1665]: time="2026-03-06T01:35:26.796984880Z" level=info msg="Loading containers: done." Mar 6 01:35:26.827427 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1801919736-merged.mount: Deactivated successfully. Mar 6 01:35:26.828479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 6 01:35:26.830462 dockerd[1665]: time="2026-03-06T01:35:26.830388740Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 01:35:26.830590 dockerd[1665]: time="2026-03-06T01:35:26.830549106Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 6 01:35:26.830764 dockerd[1665]: time="2026-03-06T01:35:26.830704021Z" level=info msg="Daemon has completed initialization" Mar 6 01:35:26.837335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:35:26.883576 dockerd[1665]: time="2026-03-06T01:35:26.883289311Z" level=info msg="API listen on /run/docker.sock" Mar 6 01:35:26.884844 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 6 01:35:27.645569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:35:27.676566 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:35:28.008223 kubelet[1819]: E0306 01:35:28.007793 1819 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:35:28.017022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:35:28.017262 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:35:28.017836 systemd[1]: kubelet.service: Consumed 1.017s CPU time. Mar 6 01:35:28.174372 containerd[1459]: time="2026-03-06T01:35:28.174303888Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 6 01:35:28.944173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906378043.mount: Deactivated successfully. 
Mar 6 01:35:30.585689 containerd[1459]: time="2026-03-06T01:35:30.585593382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:30.586733 containerd[1459]: time="2026-03-06T01:35:30.586170538Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 6 01:35:30.588406 containerd[1459]: time="2026-03-06T01:35:30.588326277Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:30.593669 containerd[1459]: time="2026-03-06T01:35:30.593574279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:30.595090 containerd[1459]: time="2026-03-06T01:35:30.595042282Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.420690148s" Mar 6 01:35:30.595180 containerd[1459]: time="2026-03-06T01:35:30.595115209Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 6 01:35:30.597298 containerd[1459]: time="2026-03-06T01:35:30.597243900Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 6 01:35:31.970355 containerd[1459]: time="2026-03-06T01:35:31.970296733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:31.971773 containerd[1459]: time="2026-03-06T01:35:31.971663744Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 6 01:35:31.973043 containerd[1459]: time="2026-03-06T01:35:31.972967896Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:31.977010 containerd[1459]: time="2026-03-06T01:35:31.976940609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:31.978380 containerd[1459]: time="2026-03-06T01:35:31.978290147Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.380997761s" Mar 6 01:35:31.978495 containerd[1459]: time="2026-03-06T01:35:31.978362274Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 6 01:35:31.979059 containerd[1459]: 
time="2026-03-06T01:35:31.979022151Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 6 01:35:33.280046 containerd[1459]: time="2026-03-06T01:35:33.279883684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:33.280953 containerd[1459]: time="2026-03-06T01:35:33.280848590Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 6 01:35:33.282439 containerd[1459]: time="2026-03-06T01:35:33.282349346Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:33.286432 containerd[1459]: time="2026-03-06T01:35:33.286366447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:33.287686 containerd[1459]: time="2026-03-06T01:35:33.287634584Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.308568856s" Mar 6 01:35:33.287686 containerd[1459]: time="2026-03-06T01:35:33.287677989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 6 01:35:33.288490 containerd[1459]: time="2026-03-06T01:35:33.288431480Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 6 01:35:34.311123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552917720.mount: Deactivated successfully. 
Mar 6 01:35:34.698833 containerd[1459]: time="2026-03-06T01:35:34.698747236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:34.699733 containerd[1459]: time="2026-03-06T01:35:34.699658874Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 6 01:35:34.701360 containerd[1459]: time="2026-03-06T01:35:34.701258579Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:34.704235 containerd[1459]: time="2026-03-06T01:35:34.704181382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:34.705191 containerd[1459]: time="2026-03-06T01:35:34.705125044Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.416644642s" Mar 6 01:35:34.705191 containerd[1459]: time="2026-03-06T01:35:34.705184924Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 6 01:35:34.706038 containerd[1459]: time="2026-03-06T01:35:34.706006892Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 6 01:35:35.183491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155439074.mount: Deactivated successfully. 
Mar 6 01:35:36.277598 containerd[1459]: time="2026-03-06T01:35:36.277454489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:36.278568 containerd[1459]: time="2026-03-06T01:35:36.278512518Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 6 01:35:36.281218 containerd[1459]: time="2026-03-06T01:35:36.281099248Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:36.286438 containerd[1459]: time="2026-03-06T01:35:36.286386355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:36.289111 containerd[1459]: time="2026-03-06T01:35:36.289042823Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.582865314s" Mar 6 01:35:36.289264 containerd[1459]: time="2026-03-06T01:35:36.289104092Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 6 01:35:36.290072 containerd[1459]: time="2026-03-06T01:35:36.290010276Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 6 01:35:37.414844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665959127.mount: Deactivated successfully. 
Mar 6 01:35:37.423244 containerd[1459]: time="2026-03-06T01:35:37.423147725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:37.424817 containerd[1459]: time="2026-03-06T01:35:37.424731016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 6 01:35:37.426054 containerd[1459]: time="2026-03-06T01:35:37.426002368Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:37.429985 containerd[1459]: time="2026-03-06T01:35:37.429877556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:37.430848 containerd[1459]: time="2026-03-06T01:35:37.430742353Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.140674344s" Mar 6 01:35:37.430848 containerd[1459]: time="2026-03-06T01:35:37.430785698Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 6 01:35:37.435204 containerd[1459]: time="2026-03-06T01:35:37.435158240Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 6 01:35:38.165706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 6 01:35:38.198472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:35:40.688815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2758340012.mount: Deactivated successfully. Mar 6 01:35:41.398996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:35:41.403478 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:35:42.206725 kubelet[1984]: E0306 01:35:42.206375 1984 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:35:42.213014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:35:42.213299 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:35:42.213776 systemd[1]: kubelet.service: Consumed 3.614s CPU time. 
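Note: the restart cadence of the failing kubelet is visible in the timestamps: the second failure (01:35:28.017262) and the "restart counter is at 2" entry above (01:35:38.165706) are ~10.15 s apart, consistent with a RestartSec= of roughly 10 s, which is what kubeadm-style kubelet units typically ship. The arithmetic, with the year assumed from the containerd timestamps:

```python
from datetime import datetime

FMT = "%Y %b %d %H:%M:%S.%f"
Y = "2026"  # assumed; journal stamps omit the year

failed = datetime.strptime(f"{Y} Mar 6 01:35:28.017262", FMT)     # 2nd kubelet failure
scheduled = datetime.strptime(f"{Y} Mar 6 01:35:38.165706", FMT)  # restart counter -> 2
print((scheduled - failed).total_seconds())  # -> 10.148444
```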
Mar 6 01:35:47.152273 containerd[1459]: time="2026-03-06T01:35:47.151576562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:47.152273 containerd[1459]: time="2026-03-06T01:35:47.152240153Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 6 01:35:47.155172 containerd[1459]: time="2026-03-06T01:35:47.154968580Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:47.162255 containerd[1459]: time="2026-03-06T01:35:47.162176708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:35:47.164618 containerd[1459]: time="2026-03-06T01:35:47.164526330Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 9.729288791s" Mar 6 01:35:47.164713 containerd[1459]: time="2026-03-06T01:35:47.164666185Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 6 01:35:51.217749 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:35:51.218036 systemd[1]: kubelet.service: Consumed 3.614s CPU time. Mar 6 01:35:51.234228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:35:51.268362 systemd[1]: Reloading requested from client PID 2075 ('systemctl') (unit session-9.scope)... Mar 6 01:35:51.268411 systemd[1]: Reloading... Mar 6 01:35:51.377988 zram_generator::config[2114]: No configuration found. Mar 6 01:35:51.530093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:35:51.720828 systemd[1]: Reloading finished in 451 ms. Mar 6 01:35:51.800216 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 6 01:35:51.800369 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 6 01:35:51.800827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:35:51.814326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:35:52.179349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:35:52.186879 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:35:52.526811 kubelet[2159]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:35:52.526811 kubelet[2159]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
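Note: the etcd pull above is by far the slowest in this boot (9.73 s, during which the kubelet restart churn was also running). Dividing the reported "bytes read" by the reported duration gives an effective transfer rate; "bytes read" counts registry bytes pulled, so this is only an approximation of network throughput:

```python
BYTES_READ = 23_718_840   # "bytes read" for registry.k8s.io/etcd:3.5.24-0
SECONDS = 9.729288791     # duration reported in the Pulled message

rate = BYTES_READ / SECONDS
print(f"{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")  # ~2.44 MB/s
```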
Mar 6 01:35:52.526811 kubelet[2159]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:35:52.526811 kubelet[2159]: I0306 01:35:52.524722 2159 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 01:35:53.382855 kubelet[2159]: I0306 01:35:53.382616 2159 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 01:35:53.382855 kubelet[2159]: I0306 01:35:53.382663 2159 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:35:53.383840 kubelet[2159]: I0306 01:35:53.383439 2159 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 01:35:53.521096 kubelet[2159]: E0306 01:35:53.520489 2159 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 01:35:53.523688 kubelet[2159]: I0306 01:35:53.523493 2159 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:35:53.571402 kubelet[2159]: E0306 01:35:53.570857 2159 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:35:53.571402 kubelet[2159]: I0306 01:35:53.570965 2159 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 6 01:35:53.587476 kubelet[2159]: I0306 01:35:53.587415 2159 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 01:35:53.588118 kubelet[2159]: I0306 01:35:53.587988 2159 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:35:53.588540 kubelet[2159]: I0306 01:35:53.588100 2159 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 01:35:53.588882 kubelet[2159]: I0306 01:35:53.588581 2159 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 01:35:53.588882 kubelet[2159]: I0306 01:35:53.588593 2159 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 01:35:53.589139 kubelet[2159]: I0306 01:35:53.588972 2159 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:35:53.596010 kubelet[2159]: I0306 01:35:53.595870 2159 kubelet.go:480] "Attempting to sync node with API server" Mar 6 01:35:53.596010 kubelet[2159]: I0306 01:35:53.595980 2159 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:35:53.596010 kubelet[2159]: I0306 01:35:53.596013 2159 kubelet.go:386] "Adding apiserver pod source" Mar 6 01:35:53.598330 kubelet[2159]: I0306 01:35:53.598224 2159 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:35:53.605149 kubelet[2159]: I0306 01:35:53.605058 2159 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:35:53.606096 kubelet[2159]: I0306 01:35:53.605998 2159 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:35:53.607384 kubelet[2159]: E0306 01:35:53.607203 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 01:35:53.607384 
kubelet[2159]: E0306 01:35:53.607200 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 01:35:53.608673 kubelet[2159]: W0306 01:35:53.608638 2159 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 6 01:35:53.617597 kubelet[2159]: I0306 01:35:53.617535 2159 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 01:35:53.617691 kubelet[2159]: I0306 01:35:53.617661 2159 server.go:1289] "Started kubelet" Mar 6 01:35:53.619695 kubelet[2159]: I0306 01:35:53.619415 2159 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:35:53.619833 kubelet[2159]: I0306 01:35:53.619805 2159 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 01:35:53.621277 kubelet[2159]: I0306 01:35:53.621238 2159 server.go:317] "Adding debug handlers to kubelet server" Mar 6 01:35:53.622340 kubelet[2159]: I0306 01:35:53.622007 2159 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 01:35:53.622574 kubelet[2159]: I0306 01:35:53.622508 2159 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:35:53.626948 kubelet[2159]: I0306 01:35:53.623649 2159 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 01:35:53.626948 kubelet[2159]: E0306 01:35:53.624013 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:35:53.626948 kubelet[2159]: I0306 01:35:53.624371 2159 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 01:35:53.626948 kubelet[2159]: I0306 01:35:53.624494 2159 reconciler.go:26] "Reconciler: start to sync state" Mar 6 01:35:53.626948 kubelet[2159]: E0306 01:35:53.624863 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 01:35:53.626948 kubelet[2159]: I0306 01:35:53.624939 2159 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:35:53.626948 kubelet[2159]: E0306 01:35:53.624960 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="200ms" Mar 6 01:35:53.628199 kubelet[2159]: E0306 01:35:53.625781 2159 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.76:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1cb1e00ed495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:35:53.617568917 +0000 UTC m=+1.424302438,LastTimestamp:2026-03-06 01:35:53.617568917 +0000 UTC m=+1.424302438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 01:35:53.633625 kubelet[2159]: I0306 01:35:53.633473 2159 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:35:53.633625 kubelet[2159]: I0306 01:35:53.633583 2159 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:35:53.636990 kubelet[2159]: I0306 01:35:53.636620 2159 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:35:53.796559 kubelet[2159]: E0306 01:35:53.796138 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:35:53.800131 kubelet[2159]: E0306 01:35:53.799981 2159 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:35:53.817505 kubelet[2159]: I0306 01:35:53.817450 2159 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 01:35:53.817505 kubelet[2159]: I0306 01:35:53.817479 2159 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 01:35:53.817658 kubelet[2159]: I0306 01:35:53.817526 2159 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:35:53.824258 kubelet[2159]: I0306 01:35:53.824189 2159 policy_none.go:49] "None policy: Start" Mar 6 01:35:53.824399 kubelet[2159]: I0306 01:35:53.824354 2159 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 01:35:53.824505 kubelet[2159]: I0306 01:35:53.824467 2159 state_mem.go:35] "Initializing new in-memory state store" Mar 6 01:35:53.827951 kubelet[2159]: E0306 01:35:53.826259 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="400ms" Mar 6 01:35:53.835265 kubelet[2159]: I0306 01:35:53.834991 2159 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 01:35:53.838706 kubelet[2159]: I0306 01:35:53.837666 2159 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 6 01:35:53.838706 kubelet[2159]: I0306 01:35:53.837749 2159 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 01:35:53.838706 kubelet[2159]: I0306 01:35:53.837825 2159 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 6 01:35:53.838706 kubelet[2159]: I0306 01:35:53.837872 2159 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 01:35:53.838706 kubelet[2159]: E0306 01:35:53.838102 2159 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:35:53.840428 kubelet[2159]: E0306 01:35:53.839638 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 01:35:53.844663 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 6 01:35:53.878363 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 6 01:35:53.884604 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 6 01:35:53.894748 kubelet[2159]: E0306 01:35:53.894704 2159 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:35:53.895250 kubelet[2159]: I0306 01:35:53.895170 2159 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 01:35:53.895332 kubelet[2159]: I0306 01:35:53.895257 2159 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:35:53.899513 kubelet[2159]: I0306 01:35:53.896523 2159 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 01:35:54.021204 kubelet[2159]: E0306 01:35:54.020975 2159 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 01:35:54.021204 kubelet[2159]: E0306 01:35:54.021242 2159 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 6 01:35:54.021832 kubelet[2159]: I0306 01:35:54.021284 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:35:54.022672 kubelet[2159]: E0306 01:35:54.022623 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Mar 6 01:35:54.035851 systemd[1]: Created slice kubepods-burstable-podf85463a8a572922da957afcb63656a82.slice - libcontainer container kubepods-burstable-podf85463a8a572922da957afcb63656a82.slice. Mar 6 01:35:54.057298 kubelet[2159]: E0306 01:35:54.056759 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:35:54.068506 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 6 01:35:54.071630 kubelet[2159]: E0306 01:35:54.071350 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:35:54.073959 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. 
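The kubepods-*.slice units created above follow the kubelet's QoS cgroup layout: a parent kubepods.slice, one child slice per QoS class (kubepods-burstable.slice, kubepods-besteffort.slice), and a per-pod child slice whose name embeds the pod UID with "-" mapped to "_", because "-" is systemd's hierarchy separator inside slice unit names. A minimal sketch of that naming with invented function names (not the kubelet's actual code); the dashed UID used in main() is the kube-proxy pod's UID from later in this log:

package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming visible in the log, e.g.
// QoS "besteffort" + UID affc8131-0900-4532-b4e7-22b64042af0d
// -> kubepods-besteffort-podaffc8131_0900_4532_b4e7_22b64042af0d.slice
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_") // systemd reserves "-" for slice hierarchy
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	fmt.Println(podSliceName("besteffort", "affc8131-0900-4532-b4e7-22b64042af0d"))
}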
Mar 6 01:35:54.077172 kubelet[2159]: E0306 01:35:54.077085 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:35:54.088982 kubelet[2159]: I0306 01:35:54.088653 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f85463a8a572922da957afcb63656a82-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f85463a8a572922da957afcb63656a82\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:35:54.088982 kubelet[2159]: I0306 01:35:54.088716 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f85463a8a572922da957afcb63656a82-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f85463a8a572922da957afcb63656a82\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:35:54.088982 kubelet[2159]: I0306 01:35:54.088754 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f85463a8a572922da957afcb63656a82-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f85463a8a572922da957afcb63656a82\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:35:54.194281 kubelet[2159]: I0306 01:35:54.193703 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:35:54.195738 kubelet[2159]: I0306 01:35:54.195651 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:35:54.195843 kubelet[2159]: I0306 01:35:54.195725 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:35:54.196334 kubelet[2159]: I0306 01:35:54.195941 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:35:54.196334 kubelet[2159]: I0306 01:35:54.196247 2159 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:35:54.196334 kubelet[2159]: I0306 01:35:54.196283 2159 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:35:54.228932 kubelet[2159]: E0306 01:35:54.228539 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="800ms" Mar 6 01:35:54.231924 kubelet[2159]: I0306 01:35:54.231789 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:35:54.232483 kubelet[2159]: E0306 01:35:54.232444 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Mar 6 01:35:54.368288 kubelet[2159]: E0306 01:35:54.367645 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:35:54.372990 kubelet[2159]: E0306 01:35:54.371767 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:35:54.373377 containerd[1459]: time="2026-03-06T01:35:54.373245514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 6 01:35:54.374397 containerd[1459]: time="2026-03-06T01:35:54.373244965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f85463a8a572922da957afcb63656a82,Namespace:kube-system,Attempt:0,}" Mar 6 01:35:54.381677 kubelet[2159]: E0306 01:35:54.381513 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:35:54.383812 containerd[1459]: time="2026-03-06T01:35:54.383700583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 6 01:35:54.571059 update_engine[1442]: I20260306 01:35:54.570619 1442 update_attempter.cc:509] Updating boot flags... 
Mar 6 01:35:54.652374 kubelet[2159]: I0306 01:35:54.652288 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:35:54.653619 kubelet[2159]: E0306 01:35:54.653156 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Mar 6 01:35:54.660986 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2204) Mar 6 01:35:54.711543 kubelet[2159]: E0306 01:35:54.711447 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 01:35:55.004428 kubelet[2159]: E0306 01:35:55.004236 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 01:35:55.043437 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2206) Mar 6 01:35:55.057019 kubelet[2159]: E0306 01:35:55.036464 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="1.6s" Mar 6 01:35:55.077239 kubelet[2159]: E0306 01:35:55.068659 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 01:35:55.081956 kubelet[2159]: E0306 01:35:55.081674 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 01:35:55.269081 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2206) Mar 6 01:35:55.291054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3412549827.mount: Deactivated successfully. 
Mar 6 01:35:55.304606 containerd[1459]: time="2026-03-06T01:35:55.304517419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:35:55.307807 containerd[1459]: time="2026-03-06T01:35:55.307689036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:35:55.309014 containerd[1459]: time="2026-03-06T01:35:55.308957536Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:35:55.474223 containerd[1459]: time="2026-03-06T01:35:55.422344418Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 6 01:35:55.475513 kubelet[2159]: I0306 01:35:55.475340 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:35:55.476336 kubelet[2159]: E0306 01:35:55.476282 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Mar 6 01:35:55.476836 containerd[1459]: time="2026-03-06T01:35:55.476771820Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:35:55.479998 containerd[1459]: time="2026-03-06T01:35:55.479876912Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:35:55.485659 containerd[1459]: time="2026-03-06T01:35:55.485559619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:35:55.487200 containerd[1459]: time="2026-03-06T01:35:55.487067517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:35:55.488015 containerd[1459]: time="2026-03-06T01:35:55.487877938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.114349583s" Mar 6 01:35:55.489863 containerd[1459]: time="2026-03-06T01:35:55.489757789Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.105919803s" Mar 6 01:35:55.490479 containerd[1459]: time="2026-03-06T01:35:55.490380352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.116904876s" Mar 6 01:35:55.663634 kubelet[2159]: E0306 01:35:55.663423 2159 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 01:35:56.089007 containerd[1459]: time="2026-03-06T01:35:56.043522635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:35:56.089007 containerd[1459]: time="2026-03-06T01:35:56.043731047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:35:56.089007 containerd[1459]: time="2026-03-06T01:35:56.043757978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:35:56.089007 containerd[1459]: time="2026-03-06T01:35:56.043883107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:35:56.089007 containerd[1459]: time="2026-03-06T01:35:56.075021061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:35:56.089007 containerd[1459]: time="2026-03-06T01:35:56.081119810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:35:56.089007 containerd[1459]: time="2026-03-06T01:35:56.081180491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:35:56.089007 containerd[1459]: time="2026-03-06T01:35:56.081733839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:35:56.194066 containerd[1459]: time="2026-03-06T01:35:56.192423056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:35:56.194066 containerd[1459]: time="2026-03-06T01:35:56.192668197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:35:56.194066 containerd[1459]: time="2026-03-06T01:35:56.192825686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:35:56.194066 containerd[1459]: time="2026-03-06T01:35:56.193306583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:35:56.223269 systemd[1]: Started cri-containerd-7eeb50aa670d03549b92d5e30017da742ccc649f60c3f3336a8871098e385411.scope - libcontainer container 7eeb50aa670d03549b92d5e30017da742ccc649f60c3f3336a8871098e385411. Mar 6 01:35:56.229347 systemd[1]: Started cri-containerd-f99dea3c11bdf95f9f9b0e972bbf55556b7bfa665484bf997c64ebaa1cd4b965.scope - libcontainer container f99dea3c11bdf95f9f9b0e972bbf55556b7bfa665484bf997c64ebaa1cd4b965. 
Mar 6 01:35:56.497984 systemd[1]: run-containerd-runc-k8s.io-81f0dc95ab6d6547dd2c2bfa159f5392bc1a1e81e806838903defd0adc2789e6-runc.vajf86.mount: Deactivated successfully. Mar 6 01:35:56.509096 systemd[1]: Started cri-containerd-81f0dc95ab6d6547dd2c2bfa159f5392bc1a1e81e806838903defd0adc2789e6.scope - libcontainer container 81f0dc95ab6d6547dd2c2bfa159f5392bc1a1e81e806838903defd0adc2789e6. Mar 6 01:35:56.624402 kubelet[2159]: E0306 01:35:56.624315 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 01:35:56.627989 containerd[1459]: time="2026-03-06T01:35:56.627767018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f85463a8a572922da957afcb63656a82,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eeb50aa670d03549b92d5e30017da742ccc649f60c3f3336a8871098e385411\"" Mar 6 01:35:56.631411 kubelet[2159]: E0306 01:35:56.631100 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:35:56.633048 containerd[1459]: time="2026-03-06T01:35:56.632882453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"f99dea3c11bdf95f9f9b0e972bbf55556b7bfa665484bf997c64ebaa1cd4b965\"" Mar 6 01:35:56.640256 kubelet[2159]: E0306 01:35:56.639178 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:35:56.640256 kubelet[2159]: E0306 01:35:56.639722 2159 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="3.2s" Mar 6 01:35:56.793754 containerd[1459]: time="2026-03-06T01:35:56.793323531Z" level=info msg="CreateContainer within sandbox \"f99dea3c11bdf95f9f9b0e972bbf55556b7bfa665484bf997c64ebaa1cd4b965\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 01:35:56.793754 containerd[1459]: time="2026-03-06T01:35:56.793324264Z" level=info msg="CreateContainer within sandbox \"7eeb50aa670d03549b92d5e30017da742ccc649f60c3f3336a8871098e385411\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 01:35:56.805201 containerd[1459]: time="2026-03-06T01:35:56.805094777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"81f0dc95ab6d6547dd2c2bfa159f5392bc1a1e81e806838903defd0adc2789e6\"" Mar 6 01:35:56.806237 kubelet[2159]: E0306 01:35:56.806172 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:35:56.814636 containerd[1459]: time="2026-03-06T01:35:56.814126175Z" level=info msg="CreateContainer within sandbox \"81f0dc95ab6d6547dd2c2bfa159f5392bc1a1e81e806838903defd0adc2789e6\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 01:35:56.826287 containerd[1459]: time="2026-03-06T01:35:56.826203003Z" level=info msg="CreateContainer within sandbox \"f99dea3c11bdf95f9f9b0e972bbf55556b7bfa665484bf997c64ebaa1cd4b965\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"57aa7f62b78c07ae2eb055d7099116342ebb924c4eae70f9d11773513a486b00\"" Mar 6 01:35:56.827249 containerd[1459]: time="2026-03-06T01:35:56.827168288Z" level=info msg="StartContainer for \"57aa7f62b78c07ae2eb055d7099116342ebb924c4eae70f9d11773513a486b00\"" Mar 6 01:35:56.833803 containerd[1459]: time="2026-03-06T01:35:56.833688099Z" level=info msg="CreateContainer within sandbox \"7eeb50aa670d03549b92d5e30017da742ccc649f60c3f3336a8871098e385411\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8fa4089962ebf1e3649531ae41864132afb88a85bee1c622d9b08b3eadec1540\"" Mar 6 01:35:56.835741 containerd[1459]: time="2026-03-06T01:35:56.835601394Z" level=info msg="StartContainer for \"8fa4089962ebf1e3649531ae41864132afb88a85bee1c622d9b08b3eadec1540\"" Mar 6 01:35:56.860487 containerd[1459]: time="2026-03-06T01:35:56.860430780Z" level=info msg="CreateContainer within sandbox \"81f0dc95ab6d6547dd2c2bfa159f5392bc1a1e81e806838903defd0adc2789e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d538166cb82685713ade6eb5324d31e4247bba371a96eb0eaa698319816bfcca\"" Mar 6 01:35:56.861491 containerd[1459]: time="2026-03-06T01:35:56.861411952Z" level=info msg="StartContainer for \"d538166cb82685713ade6eb5324d31e4247bba371a96eb0eaa698319816bfcca\"" Mar 6 01:35:56.941482 systemd[1]: Started cri-containerd-57aa7f62b78c07ae2eb055d7099116342ebb924c4eae70f9d11773513a486b00.scope - libcontainer container 57aa7f62b78c07ae2eb055d7099116342ebb924c4eae70f9d11773513a486b00. Mar 6 01:35:56.952698 systemd[1]: Started cri-containerd-8fa4089962ebf1e3649531ae41864132afb88a85bee1c622d9b08b3eadec1540.scope - libcontainer container 8fa4089962ebf1e3649531ae41864132afb88a85bee1c622d9b08b3eadec1540. Mar 6 01:35:56.975100 systemd[1]: Started cri-containerd-d538166cb82685713ade6eb5324d31e4247bba371a96eb0eaa698319816bfcca.scope - libcontainer container d538166cb82685713ade6eb5324d31e4247bba371a96eb0eaa698319816bfcca. 
Mar 6 01:35:57.158483 kubelet[2159]: I0306 01:35:57.158415 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:35:57.161010 kubelet[2159]: E0306 01:35:57.159671 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 01:35:57.161010 kubelet[2159]: E0306 01:35:57.159836 2159 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" Mar 6 01:35:57.167404 kubelet[2159]: E0306 01:35:57.167327 2159 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.76:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 01:35:57.182206 containerd[1459]: time="2026-03-06T01:35:57.182147319Z" level=info msg="StartContainer for \"8fa4089962ebf1e3649531ae41864132afb88a85bee1c622d9b08b3eadec1540\" returns successfully" Mar 6 01:35:57.182353 containerd[1459]: time="2026-03-06T01:35:57.182293158Z" level=info msg="StartContainer for \"d538166cb82685713ade6eb5324d31e4247bba371a96eb0eaa698319816bfcca\" returns successfully" Mar 6 01:35:57.197461 containerd[1459]: time="2026-03-06T01:35:57.197382562Z" level=info msg="StartContainer for \"57aa7f62b78c07ae2eb055d7099116342ebb924c4eae70f9d11773513a486b00\" returns successfully" Mar 6 01:35:58.119674 kubelet[2159]: E0306 01:35:58.119504 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:35:58.121406 kubelet[2159]: E0306 01:35:58.120001 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:35:58.125986 kubelet[2159]: E0306 01:35:58.123501 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:35:58.125986 kubelet[2159]: E0306 01:35:58.124107 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:35:58.125986 kubelet[2159]: E0306 01:35:58.124751 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:35:58.125986 kubelet[2159]: E0306 01:35:58.124985 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:35:59.176133 kubelet[2159]: E0306 01:35:59.174411 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:35:59.176133 kubelet[2159]: E0306 01:35:59.174806 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:35:59.179948 kubelet[2159]: E0306 01:35:59.177715 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:35:59.179948 kubelet[2159]: E0306 01:35:59.177973 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:35:59.180781 kubelet[2159]: E0306 01:35:59.180718 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:35:59.181057 kubelet[2159]: E0306 01:35:59.181006 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:00.317990 kubelet[2159]: E0306 01:36:00.317772 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:36:00.320293 kubelet[2159]: E0306 01:36:00.319310 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:00.321583 kubelet[2159]: E0306 01:36:00.321308 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:36:00.321583 kubelet[2159]: E0306 01:36:00.321485 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:00.366185 kubelet[2159]: I0306 01:36:00.366127 2159 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:36:01.323294 kubelet[2159]: E0306 01:36:01.323090 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:36:01.327036 kubelet[2159]: E0306 01:36:01.325023 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:02.217630 kubelet[2159]: E0306 01:36:02.217275 2159 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 6 01:36:02.301123 kubelet[2159]: I0306 01:36:02.300987 2159 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 01:36:02.301291 kubelet[2159]: E0306 01:36:02.301110 2159 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 6 01:36:02.320603 kubelet[2159]: E0306 01:36:02.320515 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:36:02.423873 kubelet[2159]: E0306 01:36:02.423025 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:36:02.530663 kubelet[2159]: E0306 01:36:02.525542 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:36:02.627762 kubelet[2159]: E0306 01:36:02.626958 
2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:36:02.728876 kubelet[2159]: E0306 01:36:02.728717 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:36:02.829861 kubelet[2159]: E0306 01:36:02.829616 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:36:02.947048 kubelet[2159]: E0306 01:36:02.945856 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:36:02.961108 kubelet[2159]: E0306 01:36:02.960465 2159 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:36:02.968728 kubelet[2159]: E0306 01:36:02.961380 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:03.203186 kubelet[2159]: E0306 01:36:03.166177 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:36:03.324527 kubelet[2159]: E0306 01:36:03.324067 2159 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:36:03.324527 kubelet[2159]: I0306 01:36:03.324345 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:36:03.369029 kubelet[2159]: I0306 01:36:03.368982 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:36:03.391515 kubelet[2159]: I0306 01:36:03.391354 2159 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:36:03.985219 kubelet[2159]: I0306 01:36:03.985020 2159 apiserver.go:52] "Watching apiserver" Mar 6 01:36:03.989646 kubelet[2159]: E0306 01:36:03.989322 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:03.989646 kubelet[2159]: E0306 01:36:03.989538 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:04.004268 kubelet[2159]: E0306 01:36:04.004202 2159 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:04.025456 kubelet[2159]: I0306 01:36:04.025389 2159 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 01:36:05.375504 systemd[1]: Reloading requested from client PID 2465 ('systemctl') (unit session-9.scope)... Mar 6 01:36:05.375548 systemd[1]: Reloading... Mar 6 01:36:05.696765 zram_generator::config[2504]: No configuration found. Mar 6 01:36:05.960755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:36:06.135506 systemd[1]: Reloading finished in 759 ms. Mar 6 01:36:06.203798 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 6 01:36:06.221315 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 01:36:06.221819 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:36:06.221986 systemd[1]: kubelet.service: Consumed 6.730s CPU time, 138.9M memory peak, 0B memory swap peak. Mar 6 01:36:06.234345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:36:06.636332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:36:06.686282 (kubelet)[2549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:36:06.845674 kubelet[2549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:36:06.845674 kubelet[2549]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 01:36:06.845674 kubelet[2549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:36:06.845674 kubelet[2549]: I0306 01:36:06.845226 2549 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 01:36:06.870231 kubelet[2549]: I0306 01:36:06.868740 2549 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 01:36:06.870231 kubelet[2549]: I0306 01:36:06.868945 2549 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:36:06.871487 kubelet[2549]: I0306 01:36:06.871468 2549 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 01:36:06.873465 kubelet[2549]: I0306 01:36:06.873446 2549 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 01:36:06.878152 kubelet[2549]: I0306 01:36:06.878125 2549 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:36:06.886109 kubelet[2549]: E0306 01:36:06.886045 2549 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:36:06.886109 kubelet[2549]: I0306 01:36:06.886076 2549 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 6 01:36:06.904082 kubelet[2549]: I0306 01:36:06.903059 2549 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 01:36:06.904082 kubelet[2549]: I0306 01:36:06.903461 2549 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:36:06.904082 kubelet[2549]: I0306 01:36:06.903499 2549 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 01:36:06.904082 kubelet[2549]: I0306 01:36:06.903762 2549 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 01:36:06.904473 kubelet[2549]: I0306 01:36:06.903774 2549 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 01:36:06.904473 kubelet[2549]: I0306 01:36:06.903825 2549 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:36:06.904473 kubelet[2549]: I0306 01:36:06.904119 2549 kubelet.go:480] "Attempting to sync node with API server" Mar 6 01:36:06.904473 kubelet[2549]: I0306 01:36:06.904134 2549 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:36:06.904473 kubelet[2549]: I0306 01:36:06.904161 2549 kubelet.go:386] "Adding apiserver pod source" Mar 6 01:36:06.904473 kubelet[2549]: I0306 01:36:06.904177 2549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:36:06.905835 kubelet[2549]: I0306 01:36:06.905811 2549 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:36:06.910945 kubelet[2549]: I0306 01:36:06.908521 2549 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:36:06.920575 kubelet[2549]: I0306 01:36:06.920489 2549 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 01:36:06.920575 kubelet[2549]: I0306 01:36:06.920563 2549 server.go:1289] "Started kubelet" Mar 6 01:36:06.922545 kubelet[2549]: I0306 01:36:06.921055 2549 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:36:06.923526 kubelet[2549]: I0306 01:36:06.923489 2549 
server.go:317] "Adding debug handlers to kubelet server" Mar 6 01:36:06.924266 kubelet[2549]: I0306 01:36:06.924031 2549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 01:36:06.924653 kubelet[2549]: I0306 01:36:06.924538 2549 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:36:06.926997 kubelet[2549]: I0306 01:36:06.926283 2549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 01:36:06.928796 kubelet[2549]: E0306 01:36:06.928680 2549 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:36:06.931185 kubelet[2549]: I0306 01:36:06.931141 2549 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:36:06.931970 kubelet[2549]: I0306 01:36:06.931828 2549 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 01:36:06.932103 kubelet[2549]: I0306 01:36:06.932028 2549 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 01:36:06.932528 kubelet[2549]: I0306 01:36:06.932401 2549 reconciler.go:26] "Reconciler: start to sync state" Mar 6 01:36:06.935149 kubelet[2549]: I0306 01:36:06.934470 2549 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:36:06.935149 kubelet[2549]: I0306 01:36:06.934582 2549 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:36:06.938640 kubelet[2549]: I0306 01:36:06.938565 2549 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:36:06.985627 kubelet[2549]: I0306 01:36:06.985198 2549 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 01:36:06.990619 kubelet[2549]: I0306 01:36:06.990076 2549 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 6 01:36:06.990619 kubelet[2549]: I0306 01:36:06.990206 2549 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 01:36:06.990619 kubelet[2549]: I0306 01:36:06.990331 2549 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 6 01:36:06.990619 kubelet[2549]: I0306 01:36:06.990347 2549 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 01:36:06.990619 kubelet[2549]: E0306 01:36:06.990471 2549 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:36:07.091516 kubelet[2549]: E0306 01:36:07.091188 2549 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 6 01:36:07.109177 kubelet[2549]: I0306 01:36:07.108174 2549 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 01:36:07.109177 kubelet[2549]: I0306 01:36:07.108193 2549 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 01:36:07.109177 kubelet[2549]: I0306 01:36:07.108213 2549 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:36:07.109177 kubelet[2549]: I0306 01:36:07.108376 2549 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 6 01:36:07.109177 kubelet[2549]: I0306 01:36:07.108389 2549 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 6 01:36:07.109177 kubelet[2549]: I0306 01:36:07.108406 2549 policy_none.go:49] "None policy: Start" Mar 6 01:36:07.109177 kubelet[2549]: I0306 01:36:07.108415 2549 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 01:36:07.109177 kubelet[2549]: I0306 01:36:07.108427 2549 state_mem.go:35] "Initializing new in-memory state store" Mar 6 01:36:07.109177 kubelet[2549]: I0306 01:36:07.108509 2549 state_mem.go:75] "Updated machine memory state" Mar 6 01:36:07.116660 kubelet[2549]: E0306 01:36:07.115802 2549 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:36:07.116660 kubelet[2549]: I0306 01:36:07.116124 2549 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 01:36:07.116660 kubelet[2549]: I0306 01:36:07.116138 2549 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:36:07.117014 kubelet[2549]: I0306 01:36:07.116979 2549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 01:36:07.123283 kubelet[2549]: E0306 01:36:07.123264 2549 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 6 01:36:07.233128 kubelet[2549]: I0306 01:36:07.231339 2549 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:36:07.245081 kubelet[2549]: I0306 01:36:07.244845 2549 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 6 01:36:07.246048 kubelet[2549]: I0306 01:36:07.245105 2549 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 01:36:07.300253 kubelet[2549]: I0306 01:36:07.296459 2549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:36:07.300253 kubelet[2549]: I0306 01:36:07.296442 2549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:36:07.300253 kubelet[2549]: I0306 01:36:07.297424 2549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:36:07.330647 kubelet[2549]: E0306 01:36:07.326959 2549 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:36:07.332248 kubelet[2549]: E0306 01:36:07.327011 2549 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 01:36:07.332248 kubelet[2549]: E0306 01:36:07.326967 2549 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 01:36:07.336399 kubelet[2549]: I0306 01:36:07.335317 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f85463a8a572922da957afcb63656a82-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f85463a8a572922da957afcb63656a82\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:36:07.336399 kubelet[2549]: I0306 01:36:07.335437 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:36:07.336399 kubelet[2549]: I0306 01:36:07.335466 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:36:07.336399 kubelet[2549]: I0306 01:36:07.335499 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:36:07.336399 kubelet[2549]: I0306 01:36:07.335520 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:36:07.336565 kubelet[2549]: I0306 01:36:07.335551 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f85463a8a572922da957afcb63656a82-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f85463a8a572922da957afcb63656a82\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:36:07.336565 kubelet[2549]: I0306 01:36:07.335650 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:36:07.336565 kubelet[2549]: I0306 01:36:07.335731 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:36:07.336565 kubelet[2549]: I0306 01:36:07.335770 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f85463a8a572922da957afcb63656a82-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f85463a8a572922da957afcb63656a82\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:36:07.635531 kubelet[2549]: E0306 01:36:07.633186 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:07.635531 kubelet[2549]: E0306 01:36:07.635603 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:07.636335 kubelet[2549]: E0306 01:36:07.635735 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:07.906167 kubelet[2549]: I0306 01:36:07.906089 2549 apiserver.go:52] "Watching apiserver" Mar 6 01:36:07.932490 kubelet[2549]: I0306 01:36:07.932356 2549 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 01:36:08.175242 kubelet[2549]: E0306 01:36:08.174528 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:08.181972 kubelet[2549]: I0306 01:36:08.174533 2549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:36:08.181972 kubelet[2549]: I0306 01:36:08.179355 2549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:36:08.195118 kubelet[2549]: E0306 01:36:08.195042 2549 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 01:36:08.195803 kubelet[2549]: E0306 01:36:08.195298 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:08.195803 kubelet[2549]: E0306 01:36:08.195625 2549 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 01:36:08.196432 kubelet[2549]: E0306 01:36:08.195817 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:08.233840 kubelet[2549]: I0306 01:36:08.233757 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.233740509 podStartE2EDuration="5.233740509s" podCreationTimestamp="2026-03-06 01:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:36:08.220937452 +0000 UTC m=+1.441882738" watchObservedRunningTime="2026-03-06 01:36:08.233740509 +0000 UTC m=+1.454685785" Mar 6 01:36:08.251085 kubelet[2549]: I0306 01:36:08.250851 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.250827002 podStartE2EDuration="5.250827002s" podCreationTimestamp="2026-03-06 01:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:36:08.234090998 +0000 UTC m=+1.455036273" watchObservedRunningTime="2026-03-06 01:36:08.250827002 +0000 UTC m=+1.471772288" Mar 6 01:36:08.261460 kubelet[2549]: I0306 01:36:08.261399 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.261379796 podStartE2EDuration="5.261379796s" podCreationTimestamp="2026-03-06 01:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:36:08.251285611 +0000 UTC m=+1.472230897" watchObservedRunningTime="2026-03-06 01:36:08.261379796 +0000 UTC m=+1.482325082" Mar 6 01:36:09.415037 kubelet[2549]: E0306 01:36:09.414809 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:09.416335 kubelet[2549]: E0306 01:36:09.415579 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:10.363533 kubelet[2549]: I0306 01:36:10.363289 2549 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 01:36:10.367207 containerd[1459]: time="2026-03-06T01:36:10.364532153Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
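The pod_startup_latency_tracker durations above are plain subtractions: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (no image pull happened, so the pull fields stay at their zero value). For kube-apiserver-localhost that is 01:36:08.233740509 − 01:36:03 = 5.233740509s, exactly as logged. A quick check using the timestamps copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // time.Parse also accepts the fractional seconds
	created, err := time.Parse(layout, "2026-03-06 01:36:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-03-06 01:36:08.233740509 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // 5.233740509s, the logged podStartE2EDuration
}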
Mar 6 01:36:10.368070 kubelet[2549]: I0306 01:36:10.365549 2549 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 01:36:10.417277 kubelet[2549]: E0306 01:36:10.417117 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:10.613934 systemd[1]: Created slice kubepods-besteffort-podaffc8131_0900_4532_b4e7_22b64042af0d.slice - libcontainer container kubepods-besteffort-podaffc8131_0900_4532_b4e7_22b64042af0d.slice. Mar 6 01:36:10.697596 kubelet[2549]: I0306 01:36:10.697548 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/affc8131-0900-4532-b4e7-22b64042af0d-xtables-lock\") pod \"kube-proxy-vz9kv\" (UID: \"affc8131-0900-4532-b4e7-22b64042af0d\") " pod="kube-system/kube-proxy-vz9kv" Mar 6 01:36:10.697596 kubelet[2549]: I0306 01:36:10.697596 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/affc8131-0900-4532-b4e7-22b64042af0d-lib-modules\") pod \"kube-proxy-vz9kv\" (UID: \"affc8131-0900-4532-b4e7-22b64042af0d\") " pod="kube-system/kube-proxy-vz9kv" Mar 6 01:36:10.697596 kubelet[2549]: I0306 01:36:10.697620 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/affc8131-0900-4532-b4e7-22b64042af0d-kube-proxy\") pod \"kube-proxy-vz9kv\" (UID: \"affc8131-0900-4532-b4e7-22b64042af0d\") " pod="kube-system/kube-proxy-vz9kv" Mar 6 01:36:10.697596 kubelet[2549]: I0306 01:36:10.697666 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbmds\" (UniqueName: \"kubernetes.io/projected/affc8131-0900-4532-b4e7-22b64042af0d-kube-api-access-vbmds\") pod \"kube-proxy-vz9kv\" (UID: \"affc8131-0900-4532-b4e7-22b64042af0d\") " pod="kube-system/kube-proxy-vz9kv" Mar 6 01:36:10.968186 kubelet[2549]: E0306 01:36:10.968089 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:10.969271 containerd[1459]: time="2026-03-06T01:36:10.969094113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vz9kv,Uid:affc8131-0900-4532-b4e7-22b64042af0d,Namespace:kube-system,Attempt:0,}" Mar 6 01:36:11.025665 systemd[1]: Created slice kubepods-besteffort-pod1117b792_121d_4ee5_8ec3_99f25612b261.slice - libcontainer container kubepods-besteffort-pod1117b792_121d_4ee5_8ec3_99f25612b261.slice. Mar 6 01:36:11.029043 containerd[1459]: time="2026-03-06T01:36:11.028064858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:11.029043 containerd[1459]: time="2026-03-06T01:36:11.028360626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:11.029043 containerd[1459]: time="2026-03-06T01:36:11.028378079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:11.029043 containerd[1459]: time="2026-03-06T01:36:11.028539727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:11.074152 systemd[1]: Started cri-containerd-1e3aeb507163c9e773ef13262640cc91c81a33616d9d90ee78bf825ec1d5ca41.scope - libcontainer container 1e3aeb507163c9e773ef13262640cc91c81a33616d9d90ee78bf825ec1d5ca41. Mar 6 01:36:11.100848 kubelet[2549]: I0306 01:36:11.100741 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hfsf\" (UniqueName: \"kubernetes.io/projected/1117b792-121d-4ee5-8ec3-99f25612b261-kube-api-access-9hfsf\") pod \"tigera-operator-6bf85f8dd-fr8qr\" (UID: \"1117b792-121d-4ee5-8ec3-99f25612b261\") " pod="tigera-operator/tigera-operator-6bf85f8dd-fr8qr" Mar 6 01:36:11.100848 kubelet[2549]: I0306 01:36:11.100780 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1117b792-121d-4ee5-8ec3-99f25612b261-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-fr8qr\" (UID: \"1117b792-121d-4ee5-8ec3-99f25612b261\") " pod="tigera-operator/tigera-operator-6bf85f8dd-fr8qr" Mar 6 01:36:11.113124 containerd[1459]: time="2026-03-06T01:36:11.113041802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vz9kv,Uid:affc8131-0900-4532-b4e7-22b64042af0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e3aeb507163c9e773ef13262640cc91c81a33616d9d90ee78bf825ec1d5ca41\"" Mar 6 01:36:11.114747 kubelet[2549]: E0306 01:36:11.114714 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:11.123935 containerd[1459]: time="2026-03-06T01:36:11.123135051Z" level=info msg="CreateContainer within sandbox \"1e3aeb507163c9e773ef13262640cc91c81a33616d9d90ee78bf825ec1d5ca41\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 01:36:11.149092 containerd[1459]: time="2026-03-06T01:36:11.149015333Z" level=info msg="CreateContainer within sandbox \"1e3aeb507163c9e773ef13262640cc91c81a33616d9d90ee78bf825ec1d5ca41\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23525cdebe41e44c5555d72d87cc8fb8f4407175024ab7971b47f3d044136380\"" Mar 6 01:36:11.150103 containerd[1459]: time="2026-03-06T01:36:11.150072691Z" level=info msg="StartContainer for \"23525cdebe41e44c5555d72d87cc8fb8f4407175024ab7971b47f3d044136380\"" Mar 6 01:36:11.211250 systemd[1]: Started cri-containerd-23525cdebe41e44c5555d72d87cc8fb8f4407175024ab7971b47f3d044136380.scope - libcontainer container 23525cdebe41e44c5555d72d87cc8fb8f4407175024ab7971b47f3d044136380. Mar 6 01:36:11.333553 containerd[1459]: time="2026-03-06T01:36:11.333228719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-fr8qr,Uid:1117b792-121d-4ee5-8ec3-99f25612b261,Namespace:tigera-operator,Attempt:0,}" Mar 6 01:36:11.341405 containerd[1459]: time="2026-03-06T01:36:11.340547027Z" level=info msg="StartContainer for \"23525cdebe41e44c5555d72d87cc8fb8f4407175024ab7971b47f3d044136380\" returns successfully" Mar 6 01:36:11.385740 containerd[1459]: time="2026-03-06T01:36:11.385113155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:11.385740 containerd[1459]: time="2026-03-06T01:36:11.385197352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:11.385740 containerd[1459]: time="2026-03-06T01:36:11.385222508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:11.385740 containerd[1459]: time="2026-03-06T01:36:11.385418030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:11.417202 systemd[1]: Started cri-containerd-a83bb641b456cf4711dacb09fa99279dd4fb1b209d11a648ea4586defd81c92c.scope - libcontainer container a83bb641b456cf4711dacb09fa99279dd4fb1b209d11a648ea4586defd81c92c. Mar 6 01:36:11.428980 kubelet[2549]: E0306 01:36:11.425155 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:11.428980 kubelet[2549]: E0306 01:36:11.425876 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:11.447990 kubelet[2549]: I0306 01:36:11.446486 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vz9kv" podStartSLOduration=1.446461621 podStartE2EDuration="1.446461621s" podCreationTimestamp="2026-03-06 01:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:36:11.442834769 +0000 UTC m=+4.663780046" watchObservedRunningTime="2026-03-06 01:36:11.446461621 +0000 UTC m=+4.667406898" Mar 6 01:36:11.507131 containerd[1459]: time="2026-03-06T01:36:11.507003046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-fr8qr,Uid:1117b792-121d-4ee5-8ec3-99f25612b261,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a83bb641b456cf4711dacb09fa99279dd4fb1b209d11a648ea4586defd81c92c\"" Mar 6 01:36:11.511822 containerd[1459]: time="2026-03-06T01:36:11.511785569Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 6 01:36:11.875611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517280193.mount: Deactivated successfully. Mar 6 01:36:13.142528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490231223.mount: Deactivated successfully. 
Mar 6 01:36:15.181338 kubelet[2549]: E0306 01:36:15.177493 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:15.429503 kubelet[2549]: E0306 01:36:15.429426 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:15.552206 kubelet[2549]: E0306 01:36:15.551998 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:15.552742 kubelet[2549]: E0306 01:36:15.552661 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:16.516748 containerd[1459]: time="2026-03-06T01:36:16.516644776Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:16.517787 containerd[1459]: time="2026-03-06T01:36:16.517726521Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 6 01:36:16.519474 containerd[1459]: time="2026-03-06T01:36:16.519411032Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:16.522584 containerd[1459]: time="2026-03-06T01:36:16.522483127Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:16.523443 containerd[1459]: time="2026-03-06T01:36:16.523392335Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.011569286s" Mar 6 01:36:16.523443 containerd[1459]: time="2026-03-06T01:36:16.523438752Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 6 01:36:16.530664 containerd[1459]: time="2026-03-06T01:36:16.530609454Z" level=info msg="CreateContainer within sandbox \"a83bb641b456cf4711dacb09fa99279dd4fb1b209d11a648ea4586defd81c92c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 6 01:36:16.547189 containerd[1459]: time="2026-03-06T01:36:16.547096863Z" level=info msg="CreateContainer within sandbox \"a83bb641b456cf4711dacb09fa99279dd4fb1b209d11a648ea4586defd81c92c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d875f804456b7b40b673b49dff3b65e88d528cf3be78ecc41e04711e4e0928c1\"" Mar 6 01:36:16.548842 containerd[1459]: time="2026-03-06T01:36:16.548357836Z" level=info msg="StartContainer for \"d875f804456b7b40b673b49dff3b65e88d528cf3be78ecc41e04711e4e0928c1\"" Mar 6 01:36:16.603113 systemd[1]: Started cri-containerd-d875f804456b7b40b673b49dff3b65e88d528cf3be78ecc41e04711e4e0928c1.scope - libcontainer container d875f804456b7b40b673b49dff3b65e88d528cf3be78ecc41e04711e4e0928c1. 
Mar 6 01:36:16.648071 containerd[1459]: time="2026-03-06T01:36:16.647669388Z" level=info msg="StartContainer for \"d875f804456b7b40b673b49dff3b65e88d528cf3be78ecc41e04711e4e0928c1\" returns successfully" Mar 6 01:36:24.284861 sudo[1647]: pam_unix(sudo:session): session closed for user root Mar 6 01:36:24.292733 sshd[1644]: pam_unix(sshd:session): session closed for user core Mar 6 01:36:24.299945 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit. Mar 6 01:36:24.303120 systemd[1]: sshd@8-10.0.0.76:22-10.0.0.1:42680.service: Deactivated successfully. Mar 6 01:36:24.306982 systemd[1]: session-9.scope: Deactivated successfully. Mar 6 01:36:24.307289 systemd[1]: session-9.scope: Consumed 12.270s CPU time, 161.8M memory peak, 0B memory swap peak. Mar 6 01:36:24.310140 systemd-logind[1436]: Removed session 9. Mar 6 01:36:26.668389 kubelet[2549]: I0306 01:36:26.668252 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-fr8qr" podStartSLOduration=11.652511777 podStartE2EDuration="16.668229696s" podCreationTimestamp="2026-03-06 01:36:10 +0000 UTC" firstStartedPulling="2026-03-06 01:36:11.508881051 +0000 UTC m=+4.729826327" lastFinishedPulling="2026-03-06 01:36:16.52459896 +0000 UTC m=+9.745544246" observedRunningTime="2026-03-06 01:36:17.573404635 +0000 UTC m=+10.794349911" watchObservedRunningTime="2026-03-06 01:36:26.668229696 +0000 UTC m=+19.889174982" Mar 6 01:36:26.765034 systemd[1]: Created slice kubepods-besteffort-pod0d8a9200_7e04_44fa_b359_801e2a49da96.slice - libcontainer container kubepods-besteffort-pod0d8a9200_7e04_44fa_b359_801e2a49da96.slice. Mar 6 01:36:26.802353 systemd[1]: Created slice kubepods-besteffort-pod8af5e69f_c42b_4c78_834f_85e6aef485fe.slice - libcontainer container kubepods-besteffort-pod8af5e69f_c42b_4c78_834f_85e6aef485fe.slice. 
Mar 6 01:36:26.877756 kubelet[2549]: I0306 01:36:26.877712 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx24d\" (UniqueName: \"kubernetes.io/projected/0d8a9200-7e04-44fa-b359-801e2a49da96-kube-api-access-dx24d\") pod \"calico-typha-5957fb54ff-d7d69\" (UID: \"0d8a9200-7e04-44fa-b359-801e2a49da96\") " pod="calico-system/calico-typha-5957fb54ff-d7d69" Mar 6 01:36:26.877756 kubelet[2549]: I0306 01:36:26.877758 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-lib-modules\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.877989 kubelet[2549]: I0306 01:36:26.877775 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8af5e69f-c42b-4c78-834f-85e6aef485fe-tigera-ca-bundle\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.877989 kubelet[2549]: I0306 01:36:26.877791 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8af5e69f-c42b-4c78-834f-85e6aef485fe-node-certs\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.877989 kubelet[2549]: I0306 01:36:26.877806 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-sys-fs\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.877989 kubelet[2549]: I0306 01:36:26.877820 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-cni-bin-dir\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.877989 kubelet[2549]: I0306 01:36:26.877851 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-cni-log-dir\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.878160 kubelet[2549]: I0306 01:36:26.877869 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-cni-net-dir\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.878160 kubelet[2549]: I0306 01:36:26.877883 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-var-run-calico\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.878160 kubelet[2549]: I0306 01:36:26.877958 2549 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-bpffs\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.878160 kubelet[2549]: I0306 01:36:26.877974 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-flexvol-driver-host\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.878160 kubelet[2549]: I0306 01:36:26.878024 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-var-lib-calico\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.878268 kubelet[2549]: I0306 01:36:26.878180 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-nodeproc\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.878268 kubelet[2549]: I0306 01:36:26.878210 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-xtables-lock\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.878268 kubelet[2549]: I0306 01:36:26.878241 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8a9200-7e04-44fa-b359-801e2a49da96-tigera-ca-bundle\") pod \"calico-typha-5957fb54ff-d7d69\" (UID: \"0d8a9200-7e04-44fa-b359-801e2a49da96\") " pod="calico-system/calico-typha-5957fb54ff-d7d69" Mar 6 01:36:26.878987 kubelet[2549]: I0306 01:36:26.878688 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0d8a9200-7e04-44fa-b359-801e2a49da96-typha-certs\") pod \"calico-typha-5957fb54ff-d7d69\" (UID: \"0d8a9200-7e04-44fa-b359-801e2a49da96\") " pod="calico-system/calico-typha-5957fb54ff-d7d69" Mar 6 01:36:26.879182 kubelet[2549]: I0306 01:36:26.879131 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8af5e69f-c42b-4c78-834f-85e6aef485fe-policysync\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.879226 kubelet[2549]: I0306 01:36:26.879200 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbtct\" (UniqueName: \"kubernetes.io/projected/8af5e69f-c42b-4c78-834f-85e6aef485fe-kube-api-access-nbtct\") pod \"calico-node-9ll8t\" (UID: \"8af5e69f-c42b-4c78-834f-85e6aef485fe\") " pod="calico-system/calico-node-9ll8t" Mar 6 01:36:26.916261 kubelet[2549]: E0306 01:36:26.916144 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfs88" podUID="885266c2-0ca6-482d-827c-cc1c88e284cf" Mar 6 01:36:26.983082 kubelet[2549]: I0306 01:36:26.980652 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz62j\" (UniqueName: \"kubernetes.io/projected/885266c2-0ca6-482d-827c-cc1c88e284cf-kube-api-access-rz62j\") pod \"csi-node-driver-kfs88\" (UID: \"885266c2-0ca6-482d-827c-cc1c88e284cf\") " pod="calico-system/csi-node-driver-kfs88" Mar 6 01:36:26.983082 kubelet[2549]: I0306 01:36:26.981095 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/885266c2-0ca6-482d-827c-cc1c88e284cf-socket-dir\") pod \"csi-node-driver-kfs88\" (UID: \"885266c2-0ca6-482d-827c-cc1c88e284cf\") " pod="calico-system/csi-node-driver-kfs88" Mar 6 01:36:26.983082 kubelet[2549]: I0306 01:36:26.981118 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/885266c2-0ca6-482d-827c-cc1c88e284cf-varrun\") pod \"csi-node-driver-kfs88\" (UID: \"885266c2-0ca6-482d-827c-cc1c88e284cf\") " pod="calico-system/csi-node-driver-kfs88" Mar 6 01:36:26.983082 kubelet[2549]: I0306 01:36:26.981211 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/885266c2-0ca6-482d-827c-cc1c88e284cf-kubelet-dir\") pod \"csi-node-driver-kfs88\" (UID: \"885266c2-0ca6-482d-827c-cc1c88e284cf\") " pod="calico-system/csi-node-driver-kfs88" Mar 6 01:36:26.983082 kubelet[2549]: I0306 01:36:26.981229 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/885266c2-0ca6-482d-827c-cc1c88e284cf-registration-dir\") pod \"csi-node-driver-kfs88\" (UID: \"885266c2-0ca6-482d-827c-cc1c88e284cf\") " pod="calico-system/csi-node-driver-kfs88" Mar 6 01:36:27.007384 kubelet[2549]: E0306 01:36:27.007109 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.007384 kubelet[2549]: W0306 01:36:27.007139 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.007384 kubelet[2549]: E0306 01:36:27.007222 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.008629 kubelet[2549]: E0306 01:36:27.008199 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.008736 kubelet[2549]: W0306 01:36:27.008715 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.009079 kubelet[2549]: E0306 01:36:27.008883 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:36:27.016542 kubelet[2549]: E0306 01:36:27.016359 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.016542 kubelet[2549]: W0306 01:36:27.016388 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.016542 kubelet[2549]: E0306 01:36:27.016409 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.018118 kubelet[2549]: E0306 01:36:27.017092 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.018118 kubelet[2549]: W0306 01:36:27.017116 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.018118 kubelet[2549]: E0306 01:36:27.017137 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.070404 kubelet[2549]: E0306 01:36:27.070346 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:27.072202 containerd[1459]: time="2026-03-06T01:36:27.072070071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5957fb54ff-d7d69,Uid:0d8a9200-7e04-44fa-b359-801e2a49da96,Namespace:calico-system,Attempt:0,}" Mar 6 01:36:27.082721 kubelet[2549]: E0306 01:36:27.082548 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.082721 kubelet[2549]: W0306 01:36:27.082592 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.082721 kubelet[2549]: E0306 01:36:27.082617 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.083343 kubelet[2549]: E0306 01:36:27.083275 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.083343 kubelet[2549]: W0306 01:36:27.083317 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.083343 kubelet[2549]: E0306 01:36:27.083335 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:36:27.084202 kubelet[2549]: E0306 01:36:27.084093 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.084202 kubelet[2549]: W0306 01:36:27.084111 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.084202 kubelet[2549]: E0306 01:36:27.084125 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.085017 kubelet[2549]: E0306 01:36:27.084957 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.085017 kubelet[2549]: W0306 01:36:27.084975 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.085017 kubelet[2549]: E0306 01:36:27.084988 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.086010 kubelet[2549]: E0306 01:36:27.085506 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.086010 kubelet[2549]: W0306 01:36:27.085525 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.086010 kubelet[2549]: E0306 01:36:27.085541 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.087036 kubelet[2549]: E0306 01:36:27.087000 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.087036 kubelet[2549]: W0306 01:36:27.087035 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.087199 kubelet[2549]: E0306 01:36:27.087049 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.087758 kubelet[2549]: E0306 01:36:27.087661 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.087758 kubelet[2549]: W0306 01:36:27.087700 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.087758 kubelet[2549]: E0306 01:36:27.087716 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:36:27.088581 kubelet[2549]: E0306 01:36:27.088431 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.088581 kubelet[2549]: W0306 01:36:27.088482 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.088581 kubelet[2549]: E0306 01:36:27.088499 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.091057 kubelet[2549]: E0306 01:36:27.090136 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.091057 kubelet[2549]: W0306 01:36:27.090155 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.091057 kubelet[2549]: E0306 01:36:27.090169 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.092330 kubelet[2549]: E0306 01:36:27.092277 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.092330 kubelet[2549]: W0306 01:36:27.092291 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.092330 kubelet[2549]: E0306 01:36:27.092305 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.092802 kubelet[2549]: E0306 01:36:27.092755 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.092802 kubelet[2549]: W0306 01:36:27.092772 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.092802 kubelet[2549]: E0306 01:36:27.092785 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.093296 kubelet[2549]: E0306 01:36:27.093235 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.093296 kubelet[2549]: W0306 01:36:27.093251 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.093296 kubelet[2549]: E0306 01:36:27.093263 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:36:27.094827 kubelet[2549]: E0306 01:36:27.093709 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.094827 kubelet[2549]: W0306 01:36:27.093724 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.094827 kubelet[2549]: E0306 01:36:27.093736 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.094827 kubelet[2549]: E0306 01:36:27.094086 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.094827 kubelet[2549]: W0306 01:36:27.094098 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.094827 kubelet[2549]: E0306 01:36:27.094110 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.094827 kubelet[2549]: E0306 01:36:27.094485 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.094827 kubelet[2549]: W0306 01:36:27.094497 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.094827 kubelet[2549]: E0306 01:36:27.094511 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.095349 kubelet[2549]: E0306 01:36:27.094977 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.095349 kubelet[2549]: W0306 01:36:27.094991 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.095349 kubelet[2549]: E0306 01:36:27.095003 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.095349 kubelet[2549]: E0306 01:36:27.095308 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.095349 kubelet[2549]: W0306 01:36:27.095320 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.095349 kubelet[2549]: E0306 01:36:27.095335 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:36:27.095772 kubelet[2549]: E0306 01:36:27.095728 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.095772 kubelet[2549]: W0306 01:36:27.095765 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.096065 kubelet[2549]: E0306 01:36:27.095780 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.096431 kubelet[2549]: E0306 01:36:27.096389 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.096431 kubelet[2549]: W0306 01:36:27.096423 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.096655 kubelet[2549]: E0306 01:36:27.096441 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.097276 kubelet[2549]: E0306 01:36:27.097113 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.097276 kubelet[2549]: W0306 01:36:27.097210 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.097276 kubelet[2549]: E0306 01:36:27.097228 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.099003 kubelet[2549]: E0306 01:36:27.098962 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.099003 kubelet[2549]: W0306 01:36:27.098980 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.099003 kubelet[2549]: E0306 01:36:27.098995 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.099561 kubelet[2549]: E0306 01:36:27.099436 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.099561 kubelet[2549]: W0306 01:36:27.099507 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.099561 kubelet[2549]: E0306 01:36:27.099524 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:36:27.099991 kubelet[2549]: E0306 01:36:27.099967 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.099991 kubelet[2549]: W0306 01:36:27.099988 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.100159 kubelet[2549]: E0306 01:36:27.100003 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.100647 kubelet[2549]: E0306 01:36:27.100490 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.100647 kubelet[2549]: W0306 01:36:27.100539 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.100647 kubelet[2549]: E0306 01:36:27.100555 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.101628 kubelet[2549]: E0306 01:36:27.101552 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.101628 kubelet[2549]: W0306 01:36:27.101587 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.101628 kubelet[2549]: E0306 01:36:27.101605 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.108837 containerd[1459]: time="2026-03-06T01:36:27.108796956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9ll8t,Uid:8af5e69f-c42b-4c78-834f-85e6aef485fe,Namespace:calico-system,Attempt:0,}" Mar 6 01:36:27.119235 kubelet[2549]: E0306 01:36:27.119155 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:27.119235 kubelet[2549]: W0306 01:36:27.119199 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:27.119235 kubelet[2549]: E0306 01:36:27.119221 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:27.126066 containerd[1459]: time="2026-03-06T01:36:27.124782131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:27.126066 containerd[1459]: time="2026-03-06T01:36:27.124979137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:27.126066 containerd[1459]: time="2026-03-06T01:36:27.125002531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:27.126066 containerd[1459]: time="2026-03-06T01:36:27.125224231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:27.162880 containerd[1459]: time="2026-03-06T01:36:27.162257227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:27.162880 containerd[1459]: time="2026-03-06T01:36:27.162335613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:27.162880 containerd[1459]: time="2026-03-06T01:36:27.162355530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:27.165146 containerd[1459]: time="2026-03-06T01:36:27.162745685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:27.169135 systemd[1]: Started cri-containerd-5299f64796ba0ca33f9af4fb88a28bfd0f6c04514afe23d6c5e7f988ddfa3a02.scope - libcontainer container 5299f64796ba0ca33f9af4fb88a28bfd0f6c04514afe23d6c5e7f988ddfa3a02. Mar 6 01:36:27.206159 systemd[1]: Started cri-containerd-f712b1d58a55c3828b2ebd530733fedc47d5942749ce9d9d58c76ad552225691.scope - libcontainer container f712b1d58a55c3828b2ebd530733fedc47d5942749ce9d9d58c76ad552225691. Mar 6 01:36:27.251083 containerd[1459]: time="2026-03-06T01:36:27.243880375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5957fb54ff-d7d69,Uid:0d8a9200-7e04-44fa-b359-801e2a49da96,Namespace:calico-system,Attempt:0,} returns sandbox id \"5299f64796ba0ca33f9af4fb88a28bfd0f6c04514afe23d6c5e7f988ddfa3a02\"" Mar 6 01:36:27.252571 kubelet[2549]: E0306 01:36:27.251517 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:27.252744 containerd[1459]: time="2026-03-06T01:36:27.252716484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 6 01:36:27.273560 containerd[1459]: time="2026-03-06T01:36:27.273223915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9ll8t,Uid:8af5e69f-c42b-4c78-834f-85e6aef485fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"f712b1d58a55c3828b2ebd530733fedc47d5942749ce9d9d58c76ad552225691\"" Mar 6 01:36:28.330877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3366246475.mount: Deactivated successfully. 
Mar 6 01:36:28.849795 containerd[1459]: time="2026-03-06T01:36:28.849704728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:28.850685 containerd[1459]: time="2026-03-06T01:36:28.850618622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 6 01:36:28.852177 containerd[1459]: time="2026-03-06T01:36:28.852094746Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:28.855525 containerd[1459]: time="2026-03-06T01:36:28.855462635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:28.856493 containerd[1459]: time="2026-03-06T01:36:28.856402479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.603659486s" Mar 6 01:36:28.856493 containerd[1459]: time="2026-03-06T01:36:28.856468021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 6 01:36:28.857542 containerd[1459]: time="2026-03-06T01:36:28.857492414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 6 01:36:28.886481 containerd[1459]: time="2026-03-06T01:36:28.884359541Z" level=info msg="CreateContainer within sandbox \"5299f64796ba0ca33f9af4fb88a28bfd0f6c04514afe23d6c5e7f988ddfa3a02\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 6 01:36:28.991626 kubelet[2549]: E0306 01:36:28.990655 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfs88" podUID="885266c2-0ca6-482d-827c-cc1c88e284cf" Mar 6 01:36:29.008755 containerd[1459]: time="2026-03-06T01:36:29.008660113Z" level=info msg="CreateContainer within sandbox \"5299f64796ba0ca33f9af4fb88a28bfd0f6c04514afe23d6c5e7f988ddfa3a02\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ed20b62376572a33c93cac181bdcd837d8e720e105ae7feb8a9e5626846fcaef\"" Mar 6 01:36:29.009552 containerd[1459]: time="2026-03-06T01:36:29.009345794Z" level=info msg="StartContainer for \"ed20b62376572a33c93cac181bdcd837d8e720e105ae7feb8a9e5626846fcaef\"" Mar 6 01:36:29.058148 systemd[1]: Started cri-containerd-ed20b62376572a33c93cac181bdcd837d8e720e105ae7feb8a9e5626846fcaef.scope - libcontainer container ed20b62376572a33c93cac181bdcd837d8e720e105ae7feb8a9e5626846fcaef. 
Mar 6 01:36:29.115530 containerd[1459]: time="2026-03-06T01:36:29.115316063Z" level=info msg="StartContainer for \"ed20b62376572a33c93cac181bdcd837d8e720e105ae7feb8a9e5626846fcaef\" returns successfully" Mar 6 01:36:29.779391 kubelet[2549]: E0306 01:36:29.779238 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:29.791317 kubelet[2549]: I0306 01:36:29.791223 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5957fb54ff-d7d69" podStartSLOduration=2.186126527 podStartE2EDuration="3.791206712s" podCreationTimestamp="2026-03-06 01:36:26 +0000 UTC" firstStartedPulling="2026-03-06 01:36:27.252262436 +0000 UTC m=+20.473207712" lastFinishedPulling="2026-03-06 01:36:28.857342621 +0000 UTC m=+22.078287897" observedRunningTime="2026-03-06 01:36:29.790701514 +0000 UTC m=+23.011646800" watchObservedRunningTime="2026-03-06 01:36:29.791206712 +0000 UTC m=+23.012151989" Mar 6 01:36:29.877872 kubelet[2549]: E0306 01:36:29.877770 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:29.877872 kubelet[2549]: W0306 01:36:29.877845 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:29.878139 kubelet[2549]: E0306 01:36:29.877978 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:36:29.918694 kubelet[2549]: E0306 01:36:29.918690 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:36:29.919087 kubelet[2549]: W0306 01:36:29.918703 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:36:29.919087 kubelet[2549]: E0306 01:36:29.918715 2549 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 6 01:36:30.093610 containerd[1459]: time="2026-03-06T01:36:30.093073748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:30.095636 containerd[1459]: time="2026-03-06T01:36:30.095546611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 6 01:36:30.098148 containerd[1459]: time="2026-03-06T01:36:30.097988297Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:30.101353 containerd[1459]: time="2026-03-06T01:36:30.101278889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:30.102096 containerd[1459]: time="2026-03-06T01:36:30.102009998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.244473362s" Mar 6 01:36:30.102096 containerd[1459]: time="2026-03-06T01:36:30.102073735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 6 01:36:30.110407 containerd[1459]: time="2026-03-06T01:36:30.110207425Z" level=info msg="CreateContainer within sandbox \"f712b1d58a55c3828b2ebd530733fedc47d5942749ce9d9d58c76ad552225691\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 6 01:36:30.136392 containerd[1459]: time="2026-03-06T01:36:30.136319814Z" level=info msg="CreateContainer within sandbox \"f712b1d58a55c3828b2ebd530733fedc47d5942749ce9d9d58c76ad552225691\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"36b3c57e6abd45168c40790f2b283dae72f74f5ecbf0c10ce53986e69f51ee1a\"" Mar 6 01:36:30.136829 containerd[1459]: time="2026-03-06T01:36:30.136779305Z" level=info msg="StartContainer for \"36b3c57e6abd45168c40790f2b283dae72f74f5ecbf0c10ce53986e69f51ee1a\"" Mar 6 01:36:30.203213 systemd[1]: Started cri-containerd-36b3c57e6abd45168c40790f2b283dae72f74f5ecbf0c10ce53986e69f51ee1a.scope - libcontainer container 36b3c57e6abd45168c40790f2b283dae72f74f5ecbf0c10ce53986e69f51ee1a. Mar 6 01:36:30.292715 containerd[1459]: time="2026-03-06T01:36:30.292556698Z" level=info msg="StartContainer for \"36b3c57e6abd45168c40790f2b283dae72f74f5ecbf0c10ce53986e69f51ee1a\" returns successfully" Mar 6 01:36:30.321401 systemd[1]: cri-containerd-36b3c57e6abd45168c40790f2b283dae72f74f5ecbf0c10ce53986e69f51ee1a.scope: Deactivated successfully. Mar 6 01:36:30.385091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36b3c57e6abd45168c40790f2b283dae72f74f5ecbf0c10ce53986e69f51ee1a-rootfs.mount: Deactivated successfully. 
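The repeated driver-call.go/plugins.go triplet above is kubelet's FlexVolume prober calling the uds driver binary with "init" and trying to parse its stdout as JSON. Because the executable does not exist yet, the output is empty, and Go's encoding/json reports exactly "unexpected end of JSON input" for empty input. The flexvol-driver container created just above (from the pod2daemon-flexvol image) is presumably what installs that binary, after which the probe errors should stop. A minimal sketch of the failure mode, not kubelet source; the path is taken from the log:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the minimal JSON a FlexVolume driver prints on stdout.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// callDriver is an illustrative reduction of kubelet's driver-call path.
func callDriver(path string, args ...string) (*driverStatus, error) {
	out, execErr := exec.Command(path, args...).Output() // out is empty when the binary is absent
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// json.Unmarshal of empty output fails with "unexpected end of JSON
		// input", the error repeated by driver-call.go:262 above; execErr
		// explains why the output was empty in the first place.
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}
```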
Mar 6 01:36:30.412145 containerd[1459]: time="2026-03-06T01:36:30.411070033Z" level=info msg="shim disconnected" id=36b3c57e6abd45168c40790f2b283dae72f74f5ecbf0c10ce53986e69f51ee1a namespace=k8s.io Mar 6 01:36:30.412145 containerd[1459]: time="2026-03-06T01:36:30.411838250Z" level=warning msg="cleaning up after shim disconnected" id=36b3c57e6abd45168c40790f2b283dae72f74f5ecbf0c10ce53986e69f51ee1a namespace=k8s.io Mar 6 01:36:30.412145 containerd[1459]: time="2026-03-06T01:36:30.411873115Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:36:30.472707 containerd[1459]: time="2026-03-06T01:36:30.472240894Z" level=warning msg="cleanup warnings time=\"2026-03-06T01:36:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 6 01:36:30.795061 kubelet[2549]: I0306 01:36:30.795016 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:36:30.795644 kubelet[2549]: E0306 01:36:30.795630 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:30.804224 containerd[1459]: time="2026-03-06T01:36:30.804145200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 6 01:36:31.010204 kubelet[2549]: E0306 01:36:31.009795 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfs88" podUID="885266c2-0ca6-482d-827c-cc1c88e284cf" Mar 6 01:36:32.998554 kubelet[2549]: E0306 01:36:32.998342 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfs88" podUID="885266c2-0ca6-482d-827c-cc1c88e284cf" Mar 6 01:36:34.992356 kubelet[2549]: E0306 01:36:34.991803 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfs88" podUID="885266c2-0ca6-482d-827c-cc1c88e284cf" Mar 6 01:36:36.994268 kubelet[2549]: E0306 01:36:36.994204 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfs88" podUID="885266c2-0ca6-482d-827c-cc1c88e284cf" Mar 6 01:36:38.993095 kubelet[2549]: E0306 01:36:38.991745 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfs88" podUID="885266c2-0ca6-482d-827c-cc1c88e284cf" Mar 6 01:36:39.402244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114013179.mount: Deactivated successfully. 
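The recurring "Nameserver limits exceeded" warnings reflect the classic resolv.conf cap of three nameservers (glibc MAXNS); the applied line in the log has exactly three entries, so at least one configured server was dropped. A sketch of that truncation under the assumption that kubelet simply keeps the first three; the fourth server below is hypothetical, since the log only shows the applied result:

```go
package main

import "fmt"

// maxNameservers mirrors the resolv.conf limit (glibc MAXNS = 3).
const maxNameservers = 3

// applyNameserverLimit keeps the first three servers and reports the rest
// as omitted, matching the shape of the kubelet warning above.
func applyNameserverLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	// Hypothetical upstream list; only the applied triple appears in the log.
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, omitted := applyNameserverLimit(servers)
	fmt.Println("applied:", applied, "omitted:", omitted)
}
```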
Mar 6 01:36:39.573252 containerd[1459]: time="2026-03-06T01:36:39.571139763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:39.577456 containerd[1459]: time="2026-03-06T01:36:39.576549363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 6 01:36:39.580533 containerd[1459]: time="2026-03-06T01:36:39.580443596Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:39.585838 containerd[1459]: time="2026-03-06T01:36:39.584958438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:39.585838 containerd[1459]: time="2026-03-06T01:36:39.585638214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.781412795s" Mar 6 01:36:39.585838 containerd[1459]: time="2026-03-06T01:36:39.585714416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 6 01:36:39.594847 containerd[1459]: time="2026-03-06T01:36:39.593197770Z" level=info msg="CreateContainer within sandbox \"f712b1d58a55c3828b2ebd530733fedc47d5942749ce9d9d58c76ad552225691\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 6 01:36:39.797201 containerd[1459]: time="2026-03-06T01:36:39.796694065Z" level=info msg="CreateContainer within sandbox \"f712b1d58a55c3828b2ebd530733fedc47d5942749ce9d9d58c76ad552225691\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"eff68a3777e4a8b8c2746aa72bd7269cb1a4f72d66000960be6c834a13eb7adf\"" Mar 6 01:36:39.798073 containerd[1459]: time="2026-03-06T01:36:39.798015457Z" level=info msg="StartContainer for \"eff68a3777e4a8b8c2746aa72bd7269cb1a4f72d66000960be6c834a13eb7adf\"" Mar 6 01:36:39.947637 systemd[1]: Started cri-containerd-eff68a3777e4a8b8c2746aa72bd7269cb1a4f72d66000960be6c834a13eb7adf.scope - libcontainer container eff68a3777e4a8b8c2746aa72bd7269cb1a4f72d66000960be6c834a13eb7adf. Mar 6 01:36:40.070666 containerd[1459]: time="2026-03-06T01:36:40.067670025Z" level=info msg="StartContainer for \"eff68a3777e4a8b8c2746aa72bd7269cb1a4f72d66000960be6c834a13eb7adf\" returns successfully" Mar 6 01:36:40.098582 systemd[1]: cri-containerd-eff68a3777e4a8b8c2746aa72bd7269cb1a4f72d66000960be6c834a13eb7adf.scope: Deactivated successfully. 
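The "in 8.781412795s" figure in the Pulled event is the span containerd measured for the calico/node pull. It can be cross-checked against the two log timestamps, the PullImage request above and the Pulled event here; a small verification sketch using values copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the containerd entries: the PullImage request
	// for ghcr.io/flatcar/calico/node:v3.31.4 and the matching Pulled event.
	start, err := time.Parse(time.RFC3339Nano, "2026-03-06T01:36:30.804145200Z")
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2026-03-06T01:36:39.585638214Z")
	if err != nil {
		panic(err)
	}
	// Prints ~8.7815s, agreeing with the reported "8.781412795s" to within
	// a fraction of a millisecond (containerd times the pull internally,
	// slightly inside the two log lines).
	fmt.Println(done.Sub(start))
}
```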
Mar 6 01:36:40.179607 containerd[1459]: time="2026-03-06T01:36:40.179419398Z" level=info msg="shim disconnected" id=eff68a3777e4a8b8c2746aa72bd7269cb1a4f72d66000960be6c834a13eb7adf namespace=k8s.io Mar 6 01:36:40.179607 containerd[1459]: time="2026-03-06T01:36:40.179500118Z" level=warning msg="cleaning up after shim disconnected" id=eff68a3777e4a8b8c2746aa72bd7269cb1a4f72d66000960be6c834a13eb7adf namespace=k8s.io Mar 6 01:36:40.179607 containerd[1459]: time="2026-03-06T01:36:40.179512210Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:36:40.402381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eff68a3777e4a8b8c2746aa72bd7269cb1a4f72d66000960be6c834a13eb7adf-rootfs.mount: Deactivated successfully. Mar 6 01:36:40.992010 kubelet[2549]: E0306 01:36:40.991844 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfs88" podUID="885266c2-0ca6-482d-827c-cc1c88e284cf" Mar 6 01:36:41.110063 containerd[1459]: time="2026-03-06T01:36:41.109957903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 6 01:36:42.998416 kubelet[2549]: E0306 01:36:42.997883 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfs88" podUID="885266c2-0ca6-482d-827c-cc1c88e284cf" Mar 6 01:36:44.994966 kubelet[2549]: E0306 01:36:44.994581 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfs88" podUID="885266c2-0ca6-482d-827c-cc1c88e284cf" Mar 6 01:36:45.177241 containerd[1459]: time="2026-03-06T01:36:45.177133234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:45.182744 containerd[1459]: time="2026-03-06T01:36:45.181125007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 6 01:36:45.185131 containerd[1459]: time="2026-03-06T01:36:45.185062093Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:45.189669 containerd[1459]: time="2026-03-06T01:36:45.189606300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:45.190982 containerd[1459]: time="2026-03-06T01:36:45.190882314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.080872083s" Mar 6 01:36:45.191032 containerd[1459]: time="2026-03-06T01:36:45.190992609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns 
image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 6 01:36:45.213024 containerd[1459]: time="2026-03-06T01:36:45.212432345Z" level=info msg="CreateContainer within sandbox \"f712b1d58a55c3828b2ebd530733fedc47d5942749ce9d9d58c76ad552225691\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 01:36:45.244430 containerd[1459]: time="2026-03-06T01:36:45.244246670Z" level=info msg="CreateContainer within sandbox \"f712b1d58a55c3828b2ebd530733fedc47d5942749ce9d9d58c76ad552225691\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3193127888b6fc553786e890995142e892220dad705c947dd3349736c6087ace\"" Mar 6 01:36:45.268535 containerd[1459]: time="2026-03-06T01:36:45.267025330Z" level=info msg="StartContainer for \"3193127888b6fc553786e890995142e892220dad705c947dd3349736c6087ace\"" Mar 6 01:36:45.327085 systemd[1]: Started cri-containerd-3193127888b6fc553786e890995142e892220dad705c947dd3349736c6087ace.scope - libcontainer container 3193127888b6fc553786e890995142e892220dad705c947dd3349736c6087ace. Mar 6 01:36:45.472652 containerd[1459]: time="2026-03-06T01:36:45.472111614Z" level=info msg="StartContainer for \"3193127888b6fc553786e890995142e892220dad705c947dd3349736c6087ace\" returns successfully" Mar 6 01:36:46.834442 systemd[1]: cri-containerd-3193127888b6fc553786e890995142e892220dad705c947dd3349736c6087ace.scope: Deactivated successfully. Mar 6 01:36:46.834966 systemd[1]: cri-containerd-3193127888b6fc553786e890995142e892220dad705c947dd3349736c6087ace.scope: Consumed 1.684s CPU time. Mar 6 01:36:46.879735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3193127888b6fc553786e890995142e892220dad705c947dd3349736c6087ace-rootfs.mount: Deactivated successfully. Mar 6 01:36:46.883641 containerd[1459]: time="2026-03-06T01:36:46.883536919Z" level=info msg="shim disconnected" id=3193127888b6fc553786e890995142e892220dad705c947dd3349736c6087ace namespace=k8s.io Mar 6 01:36:46.883641 containerd[1459]: time="2026-03-06T01:36:46.883629252Z" level=warning msg="cleaning up after shim disconnected" id=3193127888b6fc553786e890995142e892220dad705c947dd3349736c6087ace namespace=k8s.io Mar 6 01:36:46.883641 containerd[1459]: time="2026-03-06T01:36:46.883643269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:36:46.902194 kubelet[2549]: I0306 01:36:46.900999 2549 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 6 01:36:46.981020 systemd[1]: Created slice kubepods-burstable-pod259cb996_fbdd_4a81_b770_165dc5d9d831.slice - libcontainer container kubepods-burstable-pod259cb996_fbdd_4a81_b770_165dc5d9d831.slice. Mar 6 01:36:46.996217 systemd[1]: Created slice kubepods-besteffort-pod9f2da77b_8614_4197_bbb1_1398be46188f.slice - libcontainer container kubepods-besteffort-pod9f2da77b_8614_4197_bbb1_1398be46188f.slice. Mar 6 01:36:47.006071 systemd[1]: Created slice kubepods-besteffort-pod69b227be_3957_4eec_9624_244977470ca6.slice - libcontainer container kubepods-besteffort-pod69b227be_3957_4eec_9624_244977470ca6.slice. Mar 6 01:36:47.018483 systemd[1]: Created slice kubepods-besteffort-podcd4720ad_a0f0_4e45_919b_1e9f495dfc32.slice - libcontainer container kubepods-besteffort-podcd4720ad_a0f0_4e45_919b_1e9f495dfc32.slice. 
Mar 6 01:36:47.026625 kubelet[2549]: I0306 01:36:47.026486 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9f2da77b-8614-4197-bbb1-1398be46188f-goldmane-key-pair\") pod \"goldmane-5b85766d88-7867t\" (UID: \"9f2da77b-8614-4197-bbb1-1398be46188f\") " pod="calico-system/goldmane-5b85766d88-7867t" Mar 6 01:36:47.026625 kubelet[2549]: I0306 01:36:47.026566 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f2da77b-8614-4197-bbb1-1398be46188f-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-7867t\" (UID: \"9f2da77b-8614-4197-bbb1-1398be46188f\") " pod="calico-system/goldmane-5b85766d88-7867t" Mar 6 01:36:47.026625 kubelet[2549]: I0306 01:36:47.026604 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f2da77b-8614-4197-bbb1-1398be46188f-config\") pod \"goldmane-5b85766d88-7867t\" (UID: \"9f2da77b-8614-4197-bbb1-1398be46188f\") " pod="calico-system/goldmane-5b85766d88-7867t" Mar 6 01:36:47.026625 kubelet[2549]: I0306 01:36:47.026630 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7qkq\" (UniqueName: \"kubernetes.io/projected/9f2da77b-8614-4197-bbb1-1398be46188f-kube-api-access-z7qkq\") pod \"goldmane-5b85766d88-7867t\" (UID: \"9f2da77b-8614-4197-bbb1-1398be46188f\") " pod="calico-system/goldmane-5b85766d88-7867t" Mar 6 01:36:47.028176 systemd[1]: Created slice kubepods-besteffort-pod50410f0e_4b03_4463_ad7c_49d16c007f3a.slice - libcontainer container kubepods-besteffort-pod50410f0e_4b03_4463_ad7c_49d16c007f3a.slice. Mar 6 01:36:47.039769 systemd[1]: Created slice kubepods-burstable-pod8a67ece0_9c61_4759_9222_15c2c383bab1.slice - libcontainer container kubepods-burstable-pod8a67ece0_9c61_4759_9222_15c2c383bab1.slice. Mar 6 01:36:47.053165 systemd[1]: Created slice kubepods-besteffort-pod1287fe69_fec1_4536_8130_77a9ee5d6e26.slice - libcontainer container kubepods-besteffort-pod1287fe69_fec1_4536_8130_77a9ee5d6e26.slice. Mar 6 01:36:47.059387 systemd[1]: Created slice kubepods-besteffort-pod885266c2_0ca6_482d_827c_cc1c88e284cf.slice - libcontainer container kubepods-besteffort-pod885266c2_0ca6_482d_827c_cc1c88e284cf.slice. 
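Every RunPodSandbox attempt that follows fails with the same Calico CNI guard: "stat /var/lib/calico/nodename: no such file or directory". The plugin refuses to wire any pod until the calico-node container (created a few entries below) has started and written its node name to that file. An illustrative reduction of that guard, assuming it is a plain read of the file named in the error:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path the Calico CNI plugin checks, per the errors below.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename sketches the check behind the repeated sandbox failures:
// until calico-node mounts /var/lib/calico/ and writes this file, every
// CNI add/delete is rejected with the stat error seen in the log.
func readNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%v: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Println("CNI not ready:", err)
		return
	}
	fmt.Println("node:", name)
}
```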
Mar 6 01:36:47.064178 containerd[1459]: time="2026-03-06T01:36:47.064104901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfs88,Uid:885266c2-0ca6-482d-827c-cc1c88e284cf,Namespace:calico-system,Attempt:0,}" Mar 6 01:36:47.129434 kubelet[2549]: I0306 01:36:47.128201 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69b227be-3957-4eec-9624-244977470ca6-tigera-ca-bundle\") pod \"calico-kube-controllers-85b54d65f4-pdsbh\" (UID: \"69b227be-3957-4eec-9624-244977470ca6\") " pod="calico-system/calico-kube-controllers-85b54d65f4-pdsbh" Mar 6 01:36:47.129434 kubelet[2549]: I0306 01:36:47.128253 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/259cb996-fbdd-4a81-b770-165dc5d9d831-config-volume\") pod \"coredns-674b8bbfcf-8w5cr\" (UID: \"259cb996-fbdd-4a81-b770-165dc5d9d831\") " pod="kube-system/coredns-674b8bbfcf-8w5cr" Mar 6 01:36:47.129434 kubelet[2549]: I0306 01:36:47.128426 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/50410f0e-4b03-4463-ad7c-49d16c007f3a-nginx-config\") pod \"whisker-6c5f76998-vkbq8\" (UID: \"50410f0e-4b03-4463-ad7c-49d16c007f3a\") " pod="calico-system/whisker-6c5f76998-vkbq8" Mar 6 01:36:47.129434 kubelet[2549]: I0306 01:36:47.128513 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50410f0e-4b03-4463-ad7c-49d16c007f3a-whisker-ca-bundle\") pod \"whisker-6c5f76998-vkbq8\" (UID: \"50410f0e-4b03-4463-ad7c-49d16c007f3a\") " pod="calico-system/whisker-6c5f76998-vkbq8" Mar 6 01:36:47.129434 kubelet[2549]: I0306 01:36:47.128530 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r68v8\" (UniqueName: \"kubernetes.io/projected/69b227be-3957-4eec-9624-244977470ca6-kube-api-access-r68v8\") pod \"calico-kube-controllers-85b54d65f4-pdsbh\" (UID: \"69b227be-3957-4eec-9624-244977470ca6\") " pod="calico-system/calico-kube-controllers-85b54d65f4-pdsbh" Mar 6 01:36:47.129953 kubelet[2549]: I0306 01:36:47.128549 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqzdf\" (UniqueName: \"kubernetes.io/projected/259cb996-fbdd-4a81-b770-165dc5d9d831-kube-api-access-kqzdf\") pod \"coredns-674b8bbfcf-8w5cr\" (UID: \"259cb996-fbdd-4a81-b770-165dc5d9d831\") " pod="kube-system/coredns-674b8bbfcf-8w5cr" Mar 6 01:36:47.129953 kubelet[2549]: I0306 01:36:47.128563 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p58dk\" (UniqueName: \"kubernetes.io/projected/8a67ece0-9c61-4759-9222-15c2c383bab1-kube-api-access-p58dk\") pod \"coredns-674b8bbfcf-vdjph\" (UID: \"8a67ece0-9c61-4759-9222-15c2c383bab1\") " pod="kube-system/coredns-674b8bbfcf-vdjph" Mar 6 01:36:47.129953 kubelet[2549]: I0306 01:36:47.128595 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1287fe69-fec1-4536-8130-77a9ee5d6e26-calico-apiserver-certs\") pod \"calico-apiserver-6cfdc76bfd-kz2gs\" (UID: \"1287fe69-fec1-4536-8130-77a9ee5d6e26\") " 
pod="calico-system/calico-apiserver-6cfdc76bfd-kz2gs" Mar 6 01:36:47.129953 kubelet[2549]: I0306 01:36:47.128670 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd4720ad-a0f0-4e45-919b-1e9f495dfc32-calico-apiserver-certs\") pod \"calico-apiserver-6cfdc76bfd-tpll8\" (UID: \"cd4720ad-a0f0-4e45-919b-1e9f495dfc32\") " pod="calico-system/calico-apiserver-6cfdc76bfd-tpll8" Mar 6 01:36:47.129953 kubelet[2549]: I0306 01:36:47.128788 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50410f0e-4b03-4463-ad7c-49d16c007f3a-whisker-backend-key-pair\") pod \"whisker-6c5f76998-vkbq8\" (UID: \"50410f0e-4b03-4463-ad7c-49d16c007f3a\") " pod="calico-system/whisker-6c5f76998-vkbq8" Mar 6 01:36:47.130097 kubelet[2549]: I0306 01:36:47.128936 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6qg6\" (UniqueName: \"kubernetes.io/projected/50410f0e-4b03-4463-ad7c-49d16c007f3a-kube-api-access-m6qg6\") pod \"whisker-6c5f76998-vkbq8\" (UID: \"50410f0e-4b03-4463-ad7c-49d16c007f3a\") " pod="calico-system/whisker-6c5f76998-vkbq8" Mar 6 01:36:47.130097 kubelet[2549]: I0306 01:36:47.128964 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn7db\" (UniqueName: \"kubernetes.io/projected/1287fe69-fec1-4536-8130-77a9ee5d6e26-kube-api-access-cn7db\") pod \"calico-apiserver-6cfdc76bfd-kz2gs\" (UID: \"1287fe69-fec1-4536-8130-77a9ee5d6e26\") " pod="calico-system/calico-apiserver-6cfdc76bfd-kz2gs" Mar 6 01:36:47.130097 kubelet[2549]: I0306 01:36:47.128987 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc2kf\" (UniqueName: \"kubernetes.io/projected/cd4720ad-a0f0-4e45-919b-1e9f495dfc32-kube-api-access-mc2kf\") pod \"calico-apiserver-6cfdc76bfd-tpll8\" (UID: \"cd4720ad-a0f0-4e45-919b-1e9f495dfc32\") " pod="calico-system/calico-apiserver-6cfdc76bfd-tpll8" Mar 6 01:36:47.130097 kubelet[2549]: I0306 01:36:47.129054 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a67ece0-9c61-4759-9222-15c2c383bab1-config-volume\") pod \"coredns-674b8bbfcf-vdjph\" (UID: \"8a67ece0-9c61-4759-9222-15c2c383bab1\") " pod="kube-system/coredns-674b8bbfcf-vdjph" Mar 6 01:36:47.261328 containerd[1459]: time="2026-03-06T01:36:47.259257873Z" level=info msg="CreateContainer within sandbox \"f712b1d58a55c3828b2ebd530733fedc47d5942749ce9d9d58c76ad552225691\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 6 01:36:47.287060 kubelet[2549]: E0306 01:36:47.286943 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:47.288348 containerd[1459]: time="2026-03-06T01:36:47.288219726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8w5cr,Uid:259cb996-fbdd-4a81-b770-165dc5d9d831,Namespace:kube-system,Attempt:0,}" Mar 6 01:36:47.299028 containerd[1459]: time="2026-03-06T01:36:47.298791929Z" level=info msg="CreateContainer within sandbox \"f712b1d58a55c3828b2ebd530733fedc47d5942749ce9d9d58c76ad552225691\" for &ContainerMetadata{Name:calico-node,Attempt:0,} 
returns container id \"86b820aed42cf6a20c35db1f17fdd228f162a87ecb41c5993a239d96a293451e\"" Mar 6 01:36:47.299866 containerd[1459]: time="2026-03-06T01:36:47.299724258Z" level=info msg="StartContainer for \"86b820aed42cf6a20c35db1f17fdd228f162a87ecb41c5993a239d96a293451e\"" Mar 6 01:36:47.303385 containerd[1459]: time="2026-03-06T01:36:47.303121234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-7867t,Uid:9f2da77b-8614-4197-bbb1-1398be46188f,Namespace:calico-system,Attempt:0,}" Mar 6 01:36:47.314463 containerd[1459]: time="2026-03-06T01:36:47.314276434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b54d65f4-pdsbh,Uid:69b227be-3957-4eec-9624-244977470ca6,Namespace:calico-system,Attempt:0,}" Mar 6 01:36:47.332655 containerd[1459]: time="2026-03-06T01:36:47.328262964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cfdc76bfd-tpll8,Uid:cd4720ad-a0f0-4e45-919b-1e9f495dfc32,Namespace:calico-system,Attempt:0,}" Mar 6 01:36:47.336985 containerd[1459]: time="2026-03-06T01:36:47.336664670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c5f76998-vkbq8,Uid:50410f0e-4b03-4463-ad7c-49d16c007f3a,Namespace:calico-system,Attempt:0,}" Mar 6 01:36:47.347369 kubelet[2549]: E0306 01:36:47.347255 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:47.351473 containerd[1459]: time="2026-03-06T01:36:47.351367190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdjph,Uid:8a67ece0-9c61-4759-9222-15c2c383bab1,Namespace:kube-system,Attempt:0,}" Mar 6 01:36:47.360143 containerd[1459]: time="2026-03-06T01:36:47.360105186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cfdc76bfd-kz2gs,Uid:1287fe69-fec1-4536-8130-77a9ee5d6e26,Namespace:calico-system,Attempt:0,}" Mar 6 01:36:47.370550 containerd[1459]: time="2026-03-06T01:36:47.370378542Z" level=error msg="Failed to destroy network for sandbox \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.372181 containerd[1459]: time="2026-03-06T01:36:47.370877517Z" level=error msg="encountered an error cleaning up failed sandbox \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.372181 containerd[1459]: time="2026-03-06T01:36:47.370998064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfs88,Uid:885266c2-0ca6-482d-827c-cc1c88e284cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.371183 systemd[1]: Started cri-containerd-86b820aed42cf6a20c35db1f17fdd228f162a87ecb41c5993a239d96a293451e.scope - libcontainer container 
86b820aed42cf6a20c35db1f17fdd228f162a87ecb41c5993a239d96a293451e. Mar 6 01:36:47.381643 kubelet[2549]: E0306 01:36:47.380542 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.381643 kubelet[2549]: E0306 01:36:47.380612 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfs88" Mar 6 01:36:47.381643 kubelet[2549]: E0306 01:36:47.380675 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfs88" Mar 6 01:36:47.383111 kubelet[2549]: E0306 01:36:47.380745 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kfs88_calico-system(885266c2-0ca6-482d-827c-cc1c88e284cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kfs88_calico-system(885266c2-0ca6-482d-827c-cc1c88e284cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kfs88" podUID="885266c2-0ca6-482d-827c-cc1c88e284cf" Mar 6 01:36:47.592106 containerd[1459]: time="2026-03-06T01:36:47.587103820Z" level=error msg="Failed to destroy network for sandbox \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.592106 containerd[1459]: time="2026-03-06T01:36:47.590413941Z" level=info msg="StartContainer for \"86b820aed42cf6a20c35db1f17fdd228f162a87ecb41c5993a239d96a293451e\" returns successfully" Mar 6 01:36:47.593212 containerd[1459]: time="2026-03-06T01:36:47.593117514Z" level=error msg="encountered an error cleaning up failed sandbox \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.594643 containerd[1459]: time="2026-03-06T01:36:47.594602319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-7867t,Uid:9f2da77b-8614-4197-bbb1-1398be46188f,Namespace:calico-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.595943 kubelet[2549]: E0306 01:36:47.595787 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.596171 kubelet[2549]: E0306 01:36:47.595999 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-7867t" Mar 6 01:36:47.596171 kubelet[2549]: E0306 01:36:47.596027 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-7867t" Mar 6 01:36:47.596171 kubelet[2549]: E0306 01:36:47.596065 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-7867t_calico-system(9f2da77b-8614-4197-bbb1-1398be46188f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-7867t_calico-system(9f2da77b-8614-4197-bbb1-1398be46188f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-7867t" podUID="9f2da77b-8614-4197-bbb1-1398be46188f" Mar 6 01:36:47.611567 containerd[1459]: time="2026-03-06T01:36:47.611298206Z" level=error msg="Failed to destroy network for sandbox \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.613092 containerd[1459]: time="2026-03-06T01:36:47.612170473Z" level=error msg="encountered an error cleaning up failed sandbox \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.613092 containerd[1459]: time="2026-03-06T01:36:47.612244462Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-8w5cr,Uid:259cb996-fbdd-4a81-b770-165dc5d9d831,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.613402 kubelet[2549]: E0306 01:36:47.612562 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.613402 kubelet[2549]: E0306 01:36:47.612621 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8w5cr" Mar 6 01:36:47.613402 kubelet[2549]: E0306 01:36:47.612643 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8w5cr" Mar 6 01:36:47.614038 kubelet[2549]: E0306 01:36:47.612688 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8w5cr_kube-system(259cb996-fbdd-4a81-b770-165dc5d9d831)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8w5cr_kube-system(259cb996-fbdd-4a81-b770-165dc5d9d831)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8w5cr" podUID="259cb996-fbdd-4a81-b770-165dc5d9d831" Mar 6 01:36:47.686298 containerd[1459]: time="2026-03-06T01:36:47.686104568Z" level=error msg="Failed to destroy network for sandbox \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.688105 containerd[1459]: time="2026-03-06T01:36:47.688073663Z" level=error msg="encountered an error cleaning up failed sandbox \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.690989 containerd[1459]: 
time="2026-03-06T01:36:47.688205169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b54d65f4-pdsbh,Uid:69b227be-3957-4eec-9624-244977470ca6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.691134 kubelet[2549]: E0306 01:36:47.688527 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.691134 kubelet[2549]: E0306 01:36:47.688604 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85b54d65f4-pdsbh" Mar 6 01:36:47.691134 kubelet[2549]: E0306 01:36:47.688635 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85b54d65f4-pdsbh" Mar 6 01:36:47.691261 kubelet[2549]: E0306 01:36:47.688685 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85b54d65f4-pdsbh_calico-system(69b227be-3957-4eec-9624-244977470ca6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85b54d65f4-pdsbh_calico-system(69b227be-3957-4eec-9624-244977470ca6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85b54d65f4-pdsbh" podUID="69b227be-3957-4eec-9624-244977470ca6" Mar 6 01:36:47.697678 containerd[1459]: time="2026-03-06T01:36:47.697627663Z" level=error msg="Failed to destroy network for sandbox \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.698504 containerd[1459]: time="2026-03-06T01:36:47.698411293Z" level=error msg="encountered an error cleaning up failed sandbox \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.698504 containerd[1459]: time="2026-03-06T01:36:47.698464654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cfdc76bfd-tpll8,Uid:cd4720ad-a0f0-4e45-919b-1e9f495dfc32,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.699373 kubelet[2549]: E0306 01:36:47.698870 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.699373 kubelet[2549]: E0306 01:36:47.698978 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cfdc76bfd-tpll8" Mar 6 01:36:47.699373 kubelet[2549]: E0306 01:36:47.698999 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cfdc76bfd-tpll8" Mar 6 01:36:47.699545 kubelet[2549]: E0306 01:36:47.699041 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cfdc76bfd-tpll8_calico-system(cd4720ad-a0f0-4e45-919b-1e9f495dfc32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cfdc76bfd-tpll8_calico-system(cd4720ad-a0f0-4e45-919b-1e9f495dfc32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6cfdc76bfd-tpll8" podUID="cd4720ad-a0f0-4e45-919b-1e9f495dfc32" Mar 6 01:36:47.716812 containerd[1459]: time="2026-03-06T01:36:47.716759048Z" level=error msg="Failed to destroy network for sandbox \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.717869 containerd[1459]: time="2026-03-06T01:36:47.717763855Z" level=error msg="encountered an error cleaning up failed sandbox \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.717869 containerd[1459]: time="2026-03-06T01:36:47.717874944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdjph,Uid:8a67ece0-9c61-4759-9222-15c2c383bab1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.719456 kubelet[2549]: E0306 01:36:47.719370 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.719559 kubelet[2549]: E0306 01:36:47.719477 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vdjph" Mar 6 01:36:47.719559 kubelet[2549]: E0306 01:36:47.719503 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vdjph" Mar 6 01:36:47.719650 kubelet[2549]: E0306 01:36:47.719566 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vdjph_kube-system(8a67ece0-9c61-4759-9222-15c2c383bab1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vdjph_kube-system(8a67ece0-9c61-4759-9222-15c2c383bab1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vdjph" podUID="8a67ece0-9c61-4759-9222-15c2c383bab1" Mar 6 01:36:47.734524 containerd[1459]: time="2026-03-06T01:36:47.733666956Z" level=error msg="Failed to destroy network for sandbox \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.734524 containerd[1459]: time="2026-03-06T01:36:47.734375466Z" level=error msg="encountered an error cleaning up failed sandbox 
\"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.734524 containerd[1459]: time="2026-03-06T01:36:47.734421101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cfdc76bfd-kz2gs,Uid:1287fe69-fec1-4536-8130-77a9ee5d6e26,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.735197 kubelet[2549]: E0306 01:36:47.735074 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.735947 kubelet[2549]: E0306 01:36:47.735437 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cfdc76bfd-kz2gs" Mar 6 01:36:47.735947 kubelet[2549]: E0306 01:36:47.735468 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cfdc76bfd-kz2gs" Mar 6 01:36:47.735947 kubelet[2549]: E0306 01:36:47.735644 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cfdc76bfd-kz2gs_calico-system(1287fe69-fec1-4536-8130-77a9ee5d6e26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cfdc76bfd-kz2gs_calico-system(1287fe69-fec1-4536-8130-77a9ee5d6e26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6cfdc76bfd-kz2gs" podUID="1287fe69-fec1-4536-8130-77a9ee5d6e26" Mar 6 01:36:47.738826 containerd[1459]: time="2026-03-06T01:36:47.738740026Z" level=error msg="Failed to destroy network for sandbox \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.739717 containerd[1459]: 
time="2026-03-06T01:36:47.739645787Z" level=error msg="encountered an error cleaning up failed sandbox \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.739776 containerd[1459]: time="2026-03-06T01:36:47.739733331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c5f76998-vkbq8,Uid:50410f0e-4b03-4463-ad7c-49d16c007f3a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.740580 kubelet[2549]: E0306 01:36:47.740302 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:36:47.740580 kubelet[2549]: E0306 01:36:47.740490 2549 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c5f76998-vkbq8" Mar 6 01:36:47.740580 kubelet[2549]: E0306 01:36:47.740547 2549 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c5f76998-vkbq8" Mar 6 01:36:47.740704 kubelet[2549]: E0306 01:36:47.740609 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c5f76998-vkbq8_calico-system(50410f0e-4b03-4463-ad7c-49d16c007f3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c5f76998-vkbq8_calico-system(50410f0e-4b03-4463-ad7c-49d16c007f3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c5f76998-vkbq8" podUID="50410f0e-4b03-4463-ad7c-49d16c007f3a" Mar 6 01:36:47.895777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047-shm.mount: Deactivated successfully. 
Mar 6 01:36:48.217846 kubelet[2549]: I0306 01:36:48.217250 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:36:48.219691 kubelet[2549]: I0306 01:36:48.219626 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:36:48.221543 kubelet[2549]: I0306 01:36:48.221473 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:36:48.223404 containerd[1459]: time="2026-03-06T01:36:48.223353919Z" level=info msg="StopPodSandbox for \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\"" Mar 6 01:36:48.224277 kubelet[2549]: I0306 01:36:48.224223 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:36:48.224795 containerd[1459]: time="2026-03-06T01:36:48.224763468Z" level=info msg="StopPodSandbox for \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\"" Mar 6 01:36:48.227301 containerd[1459]: time="2026-03-06T01:36:48.227119214Z" level=info msg="StopPodSandbox for \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\"" Mar 6 01:36:48.227831 containerd[1459]: time="2026-03-06T01:36:48.227768418Z" level=info msg="StopPodSandbox for \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\"" Mar 6 01:36:48.229697 containerd[1459]: time="2026-03-06T01:36:48.229672795Z" level=info msg="Ensure that sandbox f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326 in task-service has been cleanup successfully" Mar 6 01:36:48.230163 containerd[1459]: time="2026-03-06T01:36:48.229739092Z" level=info msg="Ensure that sandbox 61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b in task-service has been cleanup successfully" Mar 6 01:36:48.230389 containerd[1459]: time="2026-03-06T01:36:48.230302801Z" level=info msg="Ensure that sandbox 7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3 in task-service has been cleanup successfully" Mar 6 01:36:48.231352 kubelet[2549]: I0306 01:36:48.231249 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:36:48.232360 containerd[1459]: time="2026-03-06T01:36:48.232252260Z" level=info msg="StopPodSandbox for \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\"" Mar 6 01:36:48.232623 containerd[1459]: time="2026-03-06T01:36:48.232575927Z" level=info msg="Ensure that sandbox f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a in task-service has been cleanup successfully" Mar 6 01:36:48.233416 containerd[1459]: time="2026-03-06T01:36:48.229834183Z" level=info msg="Ensure that sandbox 82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5 in task-service has been cleanup successfully" Mar 6 01:36:48.238415 kubelet[2549]: I0306 01:36:48.238047 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:36:48.238728 containerd[1459]: time="2026-03-06T01:36:48.238584084Z" level=info msg="StopPodSandbox for \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\"" Mar 6 01:36:48.238836 containerd[1459]: 
time="2026-03-06T01:36:48.238796823Z" level=info msg="Ensure that sandbox 176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11 in task-service has been cleanup successfully" Mar 6 01:36:48.244835 kubelet[2549]: I0306 01:36:48.244679 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:36:48.246066 containerd[1459]: time="2026-03-06T01:36:48.245996138Z" level=info msg="StopPodSandbox for \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\"" Mar 6 01:36:48.246173 containerd[1459]: time="2026-03-06T01:36:48.246154045Z" level=info msg="Ensure that sandbox 2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047 in task-service has been cleanup successfully" Mar 6 01:36:48.264168 kubelet[2549]: I0306 01:36:48.264091 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:36:48.265726 containerd[1459]: time="2026-03-06T01:36:48.265247872Z" level=info msg="StopPodSandbox for \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\"" Mar 6 01:36:48.265726 containerd[1459]: time="2026-03-06T01:36:48.265477692Z" level=info msg="Ensure that sandbox 359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6 in task-service has been cleanup successfully" Mar 6 01:36:48.294612 kubelet[2549]: I0306 01:36:48.293796 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9ll8t" podStartSLOduration=4.373573138 podStartE2EDuration="22.293781347s" podCreationTimestamp="2026-03-06 01:36:26 +0000 UTC" firstStartedPulling="2026-03-06 01:36:27.275219593 +0000 UTC m=+20.496164870" lastFinishedPulling="2026-03-06 01:36:45.195427783 +0000 UTC m=+38.416373079" observedRunningTime="2026-03-06 01:36:48.293446399 +0000 UTC m=+41.514391685" watchObservedRunningTime="2026-03-06 01:36:48.293781347 +0000 UTC m=+41.514726624" Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.399 [INFO][3760] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.399 [INFO][3760] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" iface="eth0" netns="/var/run/netns/cni-c1a224eb-99fc-fcf6-11f6-7dd925f131b1" Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.401 [INFO][3760] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" iface="eth0" netns="/var/run/netns/cni-c1a224eb-99fc-fcf6-11f6-7dd925f131b1" Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.402 [INFO][3760] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" iface="eth0" netns="/var/run/netns/cni-c1a224eb-99fc-fcf6-11f6-7dd925f131b1" Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.403 [INFO][3760] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.403 [INFO][3760] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.551 [INFO][3872] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" HandleID="k8s-pod-network.f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.560 [INFO][3872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.560 [INFO][3872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.592 [WARNING][3872] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" HandleID="k8s-pod-network.f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.592 [INFO][3872] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" HandleID="k8s-pod-network.f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.595 [INFO][3872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:36:48.622966 containerd[1459]: 2026-03-06 01:36:48.615 [INFO][3760] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:36:48.624943 containerd[1459]: time="2026-03-06T01:36:48.624614785Z" level=info msg="TearDown network for sandbox \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\" successfully" Mar 6 01:36:48.629005 containerd[1459]: time="2026-03-06T01:36:48.625075369Z" level=info msg="StopPodSandbox for \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\" returns successfully" Mar 6 01:36:48.629619 systemd[1]: run-netns-cni\x2dc1a224eb\x2d99fc\x2dfcf6\x2d11f6\x2d7dd925f131b1.mount: Deactivated successfully. Mar 6 01:36:48.635569 containerd[1459]: time="2026-03-06T01:36:48.635495093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cfdc76bfd-kz2gs,Uid:1287fe69-fec1-4536-8130-77a9ee5d6e26,Namespace:calico-system,Attempt:1,}" Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.409 [INFO][3782] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.409 [INFO][3782] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" iface="eth0" netns="/var/run/netns/cni-34c12006-e2f5-21cb-9798-858ba691a824" Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.412 [INFO][3782] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" iface="eth0" netns="/var/run/netns/cni-34c12006-e2f5-21cb-9798-858ba691a824" Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.428 [INFO][3782] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" iface="eth0" netns="/var/run/netns/cni-34c12006-e2f5-21cb-9798-858ba691a824" Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.431 [INFO][3782] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.431 [INFO][3782] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.601 [INFO][3884] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" HandleID="k8s-pod-network.f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.601 [INFO][3884] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.601 [INFO][3884] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.611 [WARNING][3884] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" HandleID="k8s-pod-network.f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.611 [INFO][3884] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" HandleID="k8s-pod-network.f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.616 [INFO][3884] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:36:48.651574 containerd[1459]: 2026-03-06 01:36:48.633 [INFO][3782] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:36:48.655878 containerd[1459]: time="2026-03-06T01:36:48.652254505Z" level=info msg="TearDown network for sandbox \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\" successfully" Mar 6 01:36:48.655878 containerd[1459]: time="2026-03-06T01:36:48.652285734Z" level=info msg="StopPodSandbox for \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\" returns successfully" Mar 6 01:36:48.658572 systemd[1]: run-netns-cni\x2d34c12006\x2de2f5\x2d21cb\x2d9798\x2d858ba691a824.mount: Deactivated successfully. 
Mar 6 01:36:48.662579 containerd[1459]: time="2026-03-06T01:36:48.661742090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cfdc76bfd-tpll8,Uid:cd4720ad-a0f0-4e45-919b-1e9f495dfc32,Namespace:calico-system,Attempt:1,}" Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.471 [INFO][3777] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.473 [INFO][3777] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" iface="eth0" netns="/var/run/netns/cni-24ed05ce-7613-1891-ed8f-9e066501e8a0" Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.474 [INFO][3777] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" iface="eth0" netns="/var/run/netns/cni-24ed05ce-7613-1891-ed8f-9e066501e8a0" Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.475 [INFO][3777] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" iface="eth0" netns="/var/run/netns/cni-24ed05ce-7613-1891-ed8f-9e066501e8a0" Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.475 [INFO][3777] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.476 [INFO][3777] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.603 [INFO][3911] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" HandleID="k8s-pod-network.82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Workload="localhost-k8s-whisker--6c5f76998--vkbq8-eth0" Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.603 [INFO][3911] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.616 [INFO][3911] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.628 [WARNING][3911] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" HandleID="k8s-pod-network.82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Workload="localhost-k8s-whisker--6c5f76998--vkbq8-eth0" Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.628 [INFO][3911] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" HandleID="k8s-pod-network.82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Workload="localhost-k8s-whisker--6c5f76998--vkbq8-eth0" Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.637 [INFO][3911] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:36:48.670784 containerd[1459]: 2026-03-06 01:36:48.650 [INFO][3777] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:36:48.671569 containerd[1459]: time="2026-03-06T01:36:48.671092524Z" level=info msg="TearDown network for sandbox \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\" successfully" Mar 6 01:36:48.671569 containerd[1459]: time="2026-03-06T01:36:48.671118061Z" level=info msg="StopPodSandbox for \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\" returns successfully" Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.419 [INFO][3813] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.420 [INFO][3813] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" iface="eth0" netns="/var/run/netns/cni-71a43ebe-f6dc-a03c-5ea0-33a35a1edc82" Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.421 [INFO][3813] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" iface="eth0" netns="/var/run/netns/cni-71a43ebe-f6dc-a03c-5ea0-33a35a1edc82" Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.421 [INFO][3813] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" iface="eth0" netns="/var/run/netns/cni-71a43ebe-f6dc-a03c-5ea0-33a35a1edc82" Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.421 [INFO][3813] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.421 [INFO][3813] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.611 [INFO][3882] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" HandleID="k8s-pod-network.2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.612 [INFO][3882] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.637 [INFO][3882] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.650 [WARNING][3882] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" HandleID="k8s-pod-network.2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.651 [INFO][3882] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" HandleID="k8s-pod-network.2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.657 [INFO][3882] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:36:48.682383 containerd[1459]: 2026-03-06 01:36:48.674 [INFO][3813] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:36:48.683235 containerd[1459]: time="2026-03-06T01:36:48.682959685Z" level=info msg="TearDown network for sandbox \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\" successfully" Mar 6 01:36:48.683235 containerd[1459]: time="2026-03-06T01:36:48.682988239Z" level=info msg="StopPodSandbox for \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\" returns successfully" Mar 6 01:36:48.684306 containerd[1459]: time="2026-03-06T01:36:48.683941268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfs88,Uid:885266c2-0ca6-482d-827c-cc1c88e284cf,Namespace:calico-system,Attempt:1,}" Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.407 [INFO][3823] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.407 [INFO][3823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" iface="eth0" netns="/var/run/netns/cni-196be776-3934-d036-54e9-29d194882468" Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.414 [INFO][3823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" iface="eth0" netns="/var/run/netns/cni-196be776-3934-d036-54e9-29d194882468" Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.419 [INFO][3823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" iface="eth0" netns="/var/run/netns/cni-196be776-3934-d036-54e9-29d194882468" Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.419 [INFO][3823] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.420 [INFO][3823] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.630 [INFO][3880] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" HandleID="k8s-pod-network.176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.630 [INFO][3880] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.654 [INFO][3880] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.663 [WARNING][3880] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" HandleID="k8s-pod-network.176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.663 [INFO][3880] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" HandleID="k8s-pod-network.176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.670 [INFO][3880] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:36:48.685078 containerd[1459]: 2026-03-06 01:36:48.675 [INFO][3823] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:36:48.686398 containerd[1459]: time="2026-03-06T01:36:48.685881566Z" level=info msg="TearDown network for sandbox \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\" successfully" Mar 6 01:36:48.686398 containerd[1459]: time="2026-03-06T01:36:48.685940046Z" level=info msg="StopPodSandbox for \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\" returns successfully" Mar 6 01:36:48.687126 containerd[1459]: time="2026-03-06T01:36:48.686683050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b54d65f4-pdsbh,Uid:69b227be-3957-4eec-9624-244977470ca6,Namespace:calico-system,Attempt:1,}" Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.525 [INFO][3759] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.525 [INFO][3759] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" iface="eth0" netns="/var/run/netns/cni-81f11f19-987d-d229-4915-af60da1e6a49" Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.526 [INFO][3759] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" iface="eth0" netns="/var/run/netns/cni-81f11f19-987d-d229-4915-af60da1e6a49" Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.526 [INFO][3759] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" iface="eth0" netns="/var/run/netns/cni-81f11f19-987d-d229-4915-af60da1e6a49" Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.526 [INFO][3759] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.526 [INFO][3759] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.703 [INFO][3927] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" HandleID="k8s-pod-network.61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.703 [INFO][3927] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.704 [INFO][3927] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.715 [WARNING][3927] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" HandleID="k8s-pod-network.61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.715 [INFO][3927] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" HandleID="k8s-pod-network.61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.718 [INFO][3927] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:36:48.730171 containerd[1459]: 2026-03-06 01:36:48.720 [INFO][3759] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:36:48.735172 containerd[1459]: time="2026-03-06T01:36:48.735063263Z" level=info msg="TearDown network for sandbox \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\" successfully" Mar 6 01:36:48.735172 containerd[1459]: time="2026-03-06T01:36:48.735092548Z" level=info msg="StopPodSandbox for \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\" returns successfully" Mar 6 01:36:48.736101 containerd[1459]: time="2026-03-06T01:36:48.735802926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-7867t,Uid:9f2da77b-8614-4197-bbb1-1398be46188f,Namespace:calico-system,Attempt:1,}" Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.496 [INFO][3824] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.496 [INFO][3824] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" iface="eth0" netns="/var/run/netns/cni-f29bb99b-b586-15c2-b20d-6c5d6e363562" Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.497 [INFO][3824] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" iface="eth0" netns="/var/run/netns/cni-f29bb99b-b586-15c2-b20d-6c5d6e363562" Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.497 [INFO][3824] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" iface="eth0" netns="/var/run/netns/cni-f29bb99b-b586-15c2-b20d-6c5d6e363562" Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.497 [INFO][3824] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.497 [INFO][3824] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.711 [INFO][3918] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" HandleID="k8s-pod-network.7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.711 [INFO][3918] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.718 [INFO][3918] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.731 [WARNING][3918] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" HandleID="k8s-pod-network.7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.731 [INFO][3918] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" HandleID="k8s-pod-network.7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.736 [INFO][3918] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:36:48.760499 containerd[1459]: 2026-03-06 01:36:48.745 [INFO][3824] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:36:48.763202 containerd[1459]: time="2026-03-06T01:36:48.763046182Z" level=info msg="TearDown network for sandbox \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\" successfully" Mar 6 01:36:48.763202 containerd[1459]: time="2026-03-06T01:36:48.763078381Z" level=info msg="StopPodSandbox for \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\" returns successfully" Mar 6 01:36:48.763748 kubelet[2549]: E0306 01:36:48.763689 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:48.765187 containerd[1459]: time="2026-03-06T01:36:48.765162292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8w5cr,Uid:259cb996-fbdd-4a81-b770-165dc5d9d831,Namespace:kube-system,Attempt:1,}" Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.512 [INFO][3846] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.520 [INFO][3846] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" iface="eth0" netns="/var/run/netns/cni-f867227c-063f-3494-461f-74befb779f44" Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.520 [INFO][3846] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" iface="eth0" netns="/var/run/netns/cni-f867227c-063f-3494-461f-74befb779f44" Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.528 [INFO][3846] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" iface="eth0" netns="/var/run/netns/cni-f867227c-063f-3494-461f-74befb779f44" Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.529 [INFO][3846] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.529 [INFO][3846] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.726 [INFO][3930] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" HandleID="k8s-pod-network.359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.727 [INFO][3930] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.736 [INFO][3930] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.759 [WARNING][3930] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" HandleID="k8s-pod-network.359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.759 [INFO][3930] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" HandleID="k8s-pod-network.359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.763 [INFO][3930] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:36:48.784544 containerd[1459]: 2026-03-06 01:36:48.779 [INFO][3846] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:36:48.785493 containerd[1459]: time="2026-03-06T01:36:48.785235157Z" level=info msg="TearDown network for sandbox \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\" successfully" Mar 6 01:36:48.785493 containerd[1459]: time="2026-03-06T01:36:48.785263510Z" level=info msg="StopPodSandbox for \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\" returns successfully" Mar 6 01:36:48.785821 kubelet[2549]: E0306 01:36:48.785758 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:48.788405 containerd[1459]: time="2026-03-06T01:36:48.788378253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdjph,Uid:8a67ece0-9c61-4759-9222-15c2c383bab1,Namespace:kube-system,Attempt:1,}" Mar 6 01:36:48.868973 kubelet[2549]: I0306 01:36:48.865178 2549 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/50410f0e-4b03-4463-ad7c-49d16c007f3a-nginx-config\") pod \"50410f0e-4b03-4463-ad7c-49d16c007f3a\" (UID: \"50410f0e-4b03-4463-ad7c-49d16c007f3a\") " Mar 6 01:36:48.868973 kubelet[2549]: I0306 01:36:48.865273 2549 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50410f0e-4b03-4463-ad7c-49d16c007f3a-whisker-ca-bundle\") pod \"50410f0e-4b03-4463-ad7c-49d16c007f3a\" (UID: \"50410f0e-4b03-4463-ad7c-49d16c007f3a\") " Mar 6 01:36:48.868973 kubelet[2549]: I0306 01:36:48.867044 2549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50410f0e-4b03-4463-ad7c-49d16c007f3a-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "50410f0e-4b03-4463-ad7c-49d16c007f3a" (UID: "50410f0e-4b03-4463-ad7c-49d16c007f3a"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:36:48.868973 kubelet[2549]: I0306 01:36:48.867129 2549 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6qg6\" (UniqueName: \"kubernetes.io/projected/50410f0e-4b03-4463-ad7c-49d16c007f3a-kube-api-access-m6qg6\") pod \"50410f0e-4b03-4463-ad7c-49d16c007f3a\" (UID: \"50410f0e-4b03-4463-ad7c-49d16c007f3a\") " Mar 6 01:36:48.868973 kubelet[2549]: I0306 01:36:48.867177 2549 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50410f0e-4b03-4463-ad7c-49d16c007f3a-whisker-backend-key-pair\") pod \"50410f0e-4b03-4463-ad7c-49d16c007f3a\" (UID: \"50410f0e-4b03-4463-ad7c-49d16c007f3a\") " Mar 6 01:36:48.868973 kubelet[2549]: I0306 01:36:48.867304 2549 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/50410f0e-4b03-4463-ad7c-49d16c007f3a-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 6 01:36:48.869388 kubelet[2549]: I0306 01:36:48.867720 2549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50410f0e-4b03-4463-ad7c-49d16c007f3a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "50410f0e-4b03-4463-ad7c-49d16c007f3a" (UID: "50410f0e-4b03-4463-ad7c-49d16c007f3a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:36:48.875617 kubelet[2549]: I0306 01:36:48.875490 2549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50410f0e-4b03-4463-ad7c-49d16c007f3a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "50410f0e-4b03-4463-ad7c-49d16c007f3a" (UID: "50410f0e-4b03-4463-ad7c-49d16c007f3a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 01:36:48.877352 kubelet[2549]: I0306 01:36:48.877131 2549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50410f0e-4b03-4463-ad7c-49d16c007f3a-kube-api-access-m6qg6" (OuterVolumeSpecName: "kube-api-access-m6qg6") pod "50410f0e-4b03-4463-ad7c-49d16c007f3a" (UID: "50410f0e-4b03-4463-ad7c-49d16c007f3a"). InnerVolumeSpecName "kube-api-access-m6qg6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 01:36:48.891713 systemd[1]: run-netns-cni\x2df867227c\x2d063f\x2d3494\x2d461f\x2d74befb779f44.mount: Deactivated successfully. Mar 6 01:36:48.891873 systemd[1]: run-netns-cni\x2d24ed05ce\x2d7613\x2d1891\x2ded8f\x2d9e066501e8a0.mount: Deactivated successfully. Mar 6 01:36:48.892055 systemd[1]: run-netns-cni\x2d196be776\x2d3934\x2dd036\x2d54e9\x2d29d194882468.mount: Deactivated successfully. Mar 6 01:36:48.892171 systemd[1]: run-netns-cni\x2d81f11f19\x2d987d\x2dd229\x2d4915\x2daf60da1e6a49.mount: Deactivated successfully. Mar 6 01:36:48.892287 systemd[1]: run-netns-cni\x2df29bb99b\x2db586\x2d15c2\x2db20d\x2d6c5d6e363562.mount: Deactivated successfully. Mar 6 01:36:48.892455 systemd[1]: var-lib-kubelet-pods-50410f0e\x2d4b03\x2d4463\x2dad7c\x2d49d16c007f3a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm6qg6.mount: Deactivated successfully. Mar 6 01:36:48.893389 systemd[1]: var-lib-kubelet-pods-50410f0e\x2d4b03\x2d4463\x2dad7c\x2d49d16c007f3a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 6 01:36:48.893530 systemd[1]: run-netns-cni\x2d71a43ebe\x2df6dc\x2da03c\x2d5ea0\x2d33a35a1edc82.mount: Deactivated successfully. Mar 6 01:36:48.967851 kubelet[2549]: I0306 01:36:48.967809 2549 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50410f0e-4b03-4463-ad7c-49d16c007f3a-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 6 01:36:48.968120 kubelet[2549]: I0306 01:36:48.968104 2549 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50410f0e-4b03-4463-ad7c-49d16c007f3a-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 6 01:36:48.968214 kubelet[2549]: I0306 01:36:48.968201 2549 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m6qg6\" (UniqueName: \"kubernetes.io/projected/50410f0e-4b03-4463-ad7c-49d16c007f3a-kube-api-access-m6qg6\") on node \"localhost\" DevicePath \"\"" Mar 6 01:36:49.010514 systemd[1]: Removed slice kubepods-besteffort-pod50410f0e_4b03_4463_ad7c_49d16c007f3a.slice - libcontainer container kubepods-besteffort-pod50410f0e_4b03_4463_ad7c_49d16c007f3a.slice. Mar 6 01:36:49.021947 systemd-networkd[1392]: calic34a6eb9234: Link UP Mar 6 01:36:49.024055 systemd-networkd[1392]: calic34a6eb9234: Gained carrier Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.724 [ERROR][3955] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.763 [INFO][3955] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0 calico-apiserver-6cfdc76bfd- calico-system 1287fe69-fec1-4536-8130-77a9ee5d6e26 966 0 2026-03-06 01:36:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cfdc76bfd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cfdc76bfd-kz2gs eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic34a6eb9234 [] [] }} ContainerID="9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-kz2gs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.763 [INFO][3955] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-kz2gs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.829 [INFO][4012] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" HandleID="k8s-pod-network.9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.883 [INFO][4012] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" 
HandleID="k8s-pod-network.9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040a070), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6cfdc76bfd-kz2gs", "timestamp":"2026-03-06 01:36:48.829495187 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000198000)} Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.883 [INFO][4012] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.883 [INFO][4012] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.883 [INFO][4012] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.897 [INFO][4012] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" host="localhost" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.907 [INFO][4012] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.917 [INFO][4012] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.920 [INFO][4012] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.924 [INFO][4012] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.924 [INFO][4012] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" host="localhost" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.928 [INFO][4012] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.936 [INFO][4012] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" host="localhost" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.950 [INFO][4012] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" host="localhost" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.950 [INFO][4012] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" host="localhost" Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.950 [INFO][4012] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:36:49.096113 containerd[1459]: 2026-03-06 01:36:48.950 [INFO][4012] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" HandleID="k8s-pod-network.9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:36:49.096974 containerd[1459]: 2026-03-06 01:36:48.964 [INFO][3955] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-kz2gs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0", GenerateName:"calico-apiserver-6cfdc76bfd-", Namespace:"calico-system", SelfLink:"", UID:"1287fe69-fec1-4536-8130-77a9ee5d6e26", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cfdc76bfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cfdc76bfd-kz2gs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic34a6eb9234", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:49.096974 containerd[1459]: 2026-03-06 01:36:48.964 [INFO][3955] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-kz2gs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:36:49.096974 containerd[1459]: 2026-03-06 01:36:48.964 [INFO][3955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic34a6eb9234 ContainerID="9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-kz2gs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:36:49.096974 containerd[1459]: 2026-03-06 01:36:49.031 [INFO][3955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-kz2gs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:36:49.096974 containerd[1459]: 2026-03-06 01:36:49.048 [INFO][3955] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-kz2gs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0", GenerateName:"calico-apiserver-6cfdc76bfd-", Namespace:"calico-system", SelfLink:"", UID:"1287fe69-fec1-4536-8130-77a9ee5d6e26", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cfdc76bfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f", Pod:"calico-apiserver-6cfdc76bfd-kz2gs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic34a6eb9234", MAC:"d6:7f:86:60:28:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:49.096974 containerd[1459]: 2026-03-06 01:36:49.087 [INFO][3955] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-kz2gs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:36:49.116799 systemd-networkd[1392]: caliac1c911f2c8: Link UP Mar 6 01:36:49.117464 systemd-networkd[1392]: caliac1c911f2c8: Gained carrier Mar 6 01:36:49.162291 kubelet[2549]: I0306 01:36:49.161876 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:36:49.162486 kubelet[2549]: E0306 01:36:49.162376 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:48.794 [ERROR][3986] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:48.827 [INFO][3986] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kfs88-eth0 csi-node-driver- calico-system 885266c2-0ca6-482d-827c-cc1c88e284cf 969 0 2026-03-06 01:36:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s
localhost csi-node-driver-kfs88 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliac1c911f2c8 [] [] }} ContainerID="7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" Namespace="calico-system" Pod="csi-node-driver-kfs88" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfs88-" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:48.827 [INFO][3986] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" Namespace="calico-system" Pod="csi-node-driver-kfs88" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:48.988 [INFO][4064] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" HandleID="k8s-pod-network.7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.001 [INFO][4064] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" HandleID="k8s-pod-network.7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000408120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kfs88", "timestamp":"2026-03-06 01:36:48.988504597 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002494a0)} Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.001 [INFO][4064] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.001 [INFO][4064] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.001 [INFO][4064] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.005 [INFO][4064] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" host="localhost" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.016 [INFO][4064] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.029 [INFO][4064] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.034 [INFO][4064] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.040 [INFO][4064] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.041 [INFO][4064] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" host="localhost" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.052 [INFO][4064] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8 Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.080 [INFO][4064] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" host="localhost" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.097 [INFO][4064] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" host="localhost" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.097 [INFO][4064] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" host="localhost" Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.097 [INFO][4064] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:36:49.177204 containerd[1459]: 2026-03-06 01:36:49.097 [INFO][4064] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" HandleID="k8s-pod-network.7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:36:49.178458 containerd[1459]: 2026-03-06 01:36:49.105 [INFO][3986] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" Namespace="calico-system" Pod="csi-node-driver-kfs88" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfs88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kfs88-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"885266c2-0ca6-482d-827c-cc1c88e284cf", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kfs88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac1c911f2c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:49.178458 containerd[1459]: 2026-03-06 01:36:49.106 [INFO][3986] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" Namespace="calico-system" Pod="csi-node-driver-kfs88" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:36:49.178458 containerd[1459]: 2026-03-06 01:36:49.106 [INFO][3986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac1c911f2c8 ContainerID="7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" Namespace="calico-system" Pod="csi-node-driver-kfs88" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:36:49.178458 containerd[1459]: 2026-03-06 01:36:49.120 [INFO][3986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" Namespace="calico-system" Pod="csi-node-driver-kfs88" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:36:49.178458 containerd[1459]: 2026-03-06 01:36:49.122 [INFO][3986] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" Namespace="calico-system" Pod="csi-node-driver-kfs88"
WorkloadEndpoint="localhost-k8s-csi--node--driver--kfs88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kfs88-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"885266c2-0ca6-482d-827c-cc1c88e284cf", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8", Pod:"csi-node-driver-kfs88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac1c911f2c8", MAC:"4e:3d:27:69:a5:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:49.178458 containerd[1459]: 2026-03-06 01:36:49.158 [INFO][3986] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8" Namespace="calico-system" Pod="csi-node-driver-kfs88" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:36:49.282022 kubelet[2549]: E0306 01:36:49.279581 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:49.287062 containerd[1459]: time="2026-03-06T01:36:49.282706260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:49.287062 containerd[1459]: time="2026-03-06T01:36:49.282790017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:49.287062 containerd[1459]: time="2026-03-06T01:36:49.282809223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:49.287062 containerd[1459]: time="2026-03-06T01:36:49.283107442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:49.318294 systemd-networkd[1392]: calif7a7c6c0897: Link UP Mar 6 01:36:49.321615 systemd-networkd[1392]: calif7a7c6c0897: Gained carrier Mar 6 01:36:49.400775 containerd[1459]: time="2026-03-06T01:36:49.400298538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:49.400775 containerd[1459]: time="2026-03-06T01:36:49.400420067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:49.400775 containerd[1459]: time="2026-03-06T01:36:49.400435164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:49.400775 containerd[1459]: time="2026-03-06T01:36:49.400521376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:48.799 [ERROR][3969] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:48.869 [INFO][3969] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0 calico-apiserver-6cfdc76bfd- calico-system cd4720ad-a0f0-4e45-919b-1e9f495dfc32 968 0 2026-03-06 01:36:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cfdc76bfd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cfdc76bfd-tpll8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif7a7c6c0897 [] [] }} ContainerID="944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-tpll8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:48.871 [INFO][3969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-tpll8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.050 [INFO][4071] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" HandleID="k8s-pod-network.944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.083 [INFO][4071] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" HandleID="k8s-pod-network.944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f190), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6cfdc76bfd-tpll8", "timestamp":"2026-03-06 01:36:49.050821266 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002062c0)} Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.085 [INFO][4071] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.097 [INFO][4071] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.098 [INFO][4071] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.109 [INFO][4071] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" host="localhost" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.118 [INFO][4071] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.132 [INFO][4071] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.157 [INFO][4071] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.181 [INFO][4071] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.181 [INFO][4071] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" host="localhost" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.195 [INFO][4071] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.205 [INFO][4071] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" host="localhost" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.220 [INFO][4071] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" host="localhost" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.220 [INFO][4071] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" host="localhost" Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.220 [INFO][4071] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:36:49.431292 containerd[1459]: 2026-03-06 01:36:49.220 [INFO][4071] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" HandleID="k8s-pod-network.944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:36:49.433458 containerd[1459]: 2026-03-06 01:36:49.266 [INFO][3969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-tpll8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0", GenerateName:"calico-apiserver-6cfdc76bfd-", Namespace:"calico-system", SelfLink:"", UID:"cd4720ad-a0f0-4e45-919b-1e9f495dfc32", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cfdc76bfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cfdc76bfd-tpll8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif7a7c6c0897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:49.433458 containerd[1459]: 2026-03-06 01:36:49.266 [INFO][3969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-tpll8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:36:49.433458 containerd[1459]: 2026-03-06 01:36:49.266 [INFO][3969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7a7c6c0897 ContainerID="944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-tpll8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:36:49.433458 containerd[1459]: 2026-03-06 01:36:49.331 [INFO][3969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-tpll8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:36:49.433458 containerd[1459]: 2026-03-06 01:36:49.334 [INFO][3969] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-tpll8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0", GenerateName:"calico-apiserver-6cfdc76bfd-", Namespace:"calico-system", SelfLink:"", UID:"cd4720ad-a0f0-4e45-919b-1e9f495dfc32", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cfdc76bfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c", Pod:"calico-apiserver-6cfdc76bfd-tpll8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif7a7c6c0897", MAC:"82:6d:9c:72:c1:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:49.433458 containerd[1459]: 2026-03-06 01:36:49.398 [INFO][3969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c" Namespace="calico-system" Pod="calico-apiserver-6cfdc76bfd-tpll8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:36:49.439174 systemd[1]: Started cri-containerd-9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f.scope - libcontainer container 9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f. Mar 6 01:36:49.521777 systemd[1]: Created slice kubepods-besteffort-pod956351bc_0070_433f_97da_835f4c03af41.slice - libcontainer container kubepods-besteffort-pod956351bc_0070_433f_97da_835f4c03af41.slice.
Mar 6 01:36:49.546524 systemd-networkd[1392]: cali8a032a72858: Link UP Mar 6 01:36:49.552416 systemd-networkd[1392]: cali8a032a72858: Gained carrier Mar 6 01:36:49.585120 kubelet[2549]: I0306 01:36:49.584258 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/956351bc-0070-433f-97da-835f4c03af41-nginx-config\") pod \"whisker-5858797979-6kp78\" (UID: \"956351bc-0070-433f-97da-835f4c03af41\") " pod="calico-system/whisker-5858797979-6kp78" Mar 6 01:36:49.585120 kubelet[2549]: I0306 01:36:49.584433 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49vtk\" (UniqueName: \"kubernetes.io/projected/956351bc-0070-433f-97da-835f4c03af41-kube-api-access-49vtk\") pod \"whisker-5858797979-6kp78\" (UID: \"956351bc-0070-433f-97da-835f4c03af41\") " pod="calico-system/whisker-5858797979-6kp78" Mar 6 01:36:49.585120 kubelet[2549]: I0306 01:36:49.584694 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/956351bc-0070-433f-97da-835f4c03af41-whisker-backend-key-pair\") pod \"whisker-5858797979-6kp78\" (UID: \"956351bc-0070-433f-97da-835f4c03af41\") " pod="calico-system/whisker-5858797979-6kp78" Mar 6 01:36:49.585662 kubelet[2549]: I0306 01:36:49.585527 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/956351bc-0070-433f-97da-835f4c03af41-whisker-ca-bundle\") pod \"whisker-5858797979-6kp78\" (UID: \"956351bc-0070-433f-97da-835f4c03af41\") " pod="calico-system/whisker-5858797979-6kp78" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:48.923 [ERROR][3993] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:48.954 [INFO][3993] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0 calico-kube-controllers-85b54d65f4- calico-system 69b227be-3957-4eec-9624-244977470ca6 967 0 2026-03-06 01:36:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85b54d65f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85b54d65f4-pdsbh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8a032a72858 [] [] }} ContainerID="72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" Namespace="calico-system" Pod="calico-kube-controllers-85b54d65f4-pdsbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:48.954 [INFO][3993] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" Namespace="calico-system" Pod="calico-kube-controllers-85b54d65f4-pdsbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.154 [INFO][4080] ipam/ipam_plugin.go 
235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" HandleID="k8s-pod-network.72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.199 [INFO][4080] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" HandleID="k8s-pod-network.72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000391ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85b54d65f4-pdsbh", "timestamp":"2026-03-06 01:36:49.134409408 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006b8420)} Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.199 [INFO][4080] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.220 [INFO][4080] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.222 [INFO][4080] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.231 [INFO][4080] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" host="localhost" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.245 [INFO][4080] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.331 [INFO][4080] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.385 [INFO][4080] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.391 [INFO][4080] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.392 [INFO][4080] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" host="localhost" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.414 [INFO][4080] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.436 [INFO][4080] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" host="localhost" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.495 [INFO][4080] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" host="localhost" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 
01:36:49.497 [INFO][4080] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" host="localhost" Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.499 [INFO][4080] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:36:49.600258 containerd[1459]: 2026-03-06 01:36:49.499 [INFO][4080] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" HandleID="k8s-pod-network.72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:36:49.604300 containerd[1459]: 2026-03-06 01:36:49.523 [INFO][3993] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" Namespace="calico-system" Pod="calico-kube-controllers-85b54d65f4-pdsbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0", GenerateName:"calico-kube-controllers-85b54d65f4-", Namespace:"calico-system", SelfLink:"", UID:"69b227be-3957-4eec-9624-244977470ca6", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b54d65f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85b54d65f4-pdsbh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a032a72858", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:49.604300 containerd[1459]: 2026-03-06 01:36:49.523 [INFO][3993] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" Namespace="calico-system" Pod="calico-kube-controllers-85b54d65f4-pdsbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:36:49.604300 containerd[1459]: 2026-03-06 01:36:49.523 [INFO][3993] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a032a72858 ContainerID="72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" Namespace="calico-system" Pod="calico-kube-controllers-85b54d65f4-pdsbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:36:49.604300 containerd[1459]: 2026-03-06 01:36:49.566 [INFO][3993] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding
ContainerID="72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" Namespace="calico-system" Pod="calico-kube-controllers-85b54d65f4-pdsbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:36:49.604300 containerd[1459]: 2026-03-06 01:36:49.566 [INFO][3993] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" Namespace="calico-system" Pod="calico-kube-controllers-85b54d65f4-pdsbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0", GenerateName:"calico-kube-controllers-85b54d65f4-", Namespace:"calico-system", SelfLink:"", UID:"69b227be-3957-4eec-9624-244977470ca6", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b54d65f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b", Pod:"calico-kube-controllers-85b54d65f4-pdsbh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a032a72858", MAC:"d2:48:cb:1f:a0:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:49.604300 containerd[1459]: 2026-03-06 01:36:49.591 [INFO][3993] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b" Namespace="calico-system" Pod="calico-kube-controllers-85b54d65f4-pdsbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:36:49.614240 systemd[1]: Started cri-containerd-7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8.scope - libcontainer container 7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8. Mar 6 01:36:49.660089 containerd[1459]: time="2026-03-06T01:36:49.659536020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:49.660089 containerd[1459]: time="2026-03-06T01:36:49.659734834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:49.660089 containerd[1459]: time="2026-03-06T01:36:49.659752196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:49.660089 containerd[1459]: time="2026-03-06T01:36:49.659864437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:49.674462 systemd-networkd[1392]: calidc1cd09f19e: Link UP Mar 6 01:36:49.675607 systemd-networkd[1392]: calidc1cd09f19e: Gained carrier Mar 6 01:36:49.690278 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:36:49.773475 systemd[1]: Started cri-containerd-944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c.scope - libcontainer container 944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c. Mar 6 01:36:49.831104 containerd[1459]: time="2026-03-06T01:36:49.830864194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:49.831491 containerd[1459]: time="2026-03-06T01:36:49.831079798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:49.831491 containerd[1459]: time="2026-03-06T01:36:49.831189304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:49.831491 containerd[1459]: time="2026-03-06T01:36:49.831299770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:49.833633 containerd[1459]: time="2026-03-06T01:36:49.833100581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5858797979-6kp78,Uid:956351bc-0070-433f-97da-835f4c03af41,Namespace:calico-system,Attempt:0,}" Mar 6 01:36:49.835748 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:48.966 [ERROR][4030] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.021 [INFO][4030] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0 coredns-674b8bbfcf- kube-system 259cb996-fbdd-4a81-b770-165dc5d9d831 971 0 2026-03-06 01:36:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-8w5cr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidc1cd09f19e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" Namespace="kube-system" Pod="coredns-674b8bbfcf-8w5cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8w5cr-" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.021 [INFO][4030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" Namespace="kube-system" Pod="coredns-674b8bbfcf-8w5cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.247 [INFO][4103] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" HandleID="k8s-pod-network.d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.285 [INFO][4103] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" HandleID="k8s-pod-network.d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000643e50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-8w5cr", "timestamp":"2026-03-06 01:36:49.247119848 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000142000)} Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.286 [INFO][4103] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.497 [INFO][4103] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.497 [INFO][4103] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.520 [INFO][4103] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" host="localhost" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.548 [INFO][4103] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.601 [INFO][4103] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.611 [INFO][4103] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.622 [INFO][4103] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.623 [INFO][4103] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" host="localhost" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.627 [INFO][4103] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692 Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.637 [INFO][4103] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" host="localhost" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.652 [INFO][4103] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" host="localhost" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.660 [INFO][4103] ipam/ipam.go 
895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" host="localhost" Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.660 [INFO][4103] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:36:49.852968 containerd[1459]: 2026-03-06 01:36:49.660 [INFO][4103] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" HandleID="k8s-pod-network.d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:36:49.853792 containerd[1459]: 2026-03-06 01:36:49.668 [INFO][4030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" Namespace="kube-system" Pod="coredns-674b8bbfcf-8w5cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"259cb996-fbdd-4a81-b770-165dc5d9d831", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-8w5cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc1cd09f19e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:49.853792 containerd[1459]: 2026-03-06 01:36:49.669 [INFO][4030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" Namespace="kube-system" Pod="coredns-674b8bbfcf-8w5cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:36:49.853792 containerd[1459]: 2026-03-06 01:36:49.669 [INFO][4030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc1cd09f19e ContainerID="d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" Namespace="kube-system" Pod="coredns-674b8bbfcf-8w5cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:36:49.853792 containerd[1459]: 2026-03-06 01:36:49.688 [INFO][4030] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" Namespace="kube-system" Pod="coredns-674b8bbfcf-8w5cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:36:49.853792 containerd[1459]: 2026-03-06 01:36:49.715 [INFO][4030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" Namespace="kube-system" Pod="coredns-674b8bbfcf-8w5cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"259cb996-fbdd-4a81-b770-165dc5d9d831", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692", Pod:"coredns-674b8bbfcf-8w5cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc1cd09f19e", MAC:"da:26:bc:b0:bd:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:49.853792 containerd[1459]: 2026-03-06 01:36:49.836 [INFO][4030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692" Namespace="kube-system" Pod="coredns-674b8bbfcf-8w5cr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:36:49.928176 systemd[1]: Started cri-containerd-72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b.scope - libcontainer container 72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b. 
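The [4103] trace above is one complete Calico IPAM round trip for the first coredns pod: request a single IPv4, take the host-wide lock, confirm the host's affinity to block 192.168.88.128/26, claim 192.168.88.133, write the block back to the datastore, and release the lock. A self-contained toy sketch of that claim-from-a-block idea in Go (illustrative only, not Calico's actual ipam.go; the real allocator also reserves low ordinals in each block, which is why assignment starts at .133 rather than .128):

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// block is a toy model of a host-affine IPAM block such as
// 192.168.88.128/26: lock, find a free address, claim it, unlock.
type block struct {
	mu   sync.Mutex        // stands in for the host-wide IPAM lock in the logs
	cidr *net.IPNet        // the block's CIDR
	used map[string]string // claimed IP -> handle
}

func newBlock(cidr string) (*block, error) {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	return &block{cidr: n, used: make(map[string]string)}, nil
}

// autoAssign claims the lowest free address in the block for handle,
// mirroring "Attempting to assign 1 addresses from block".
func (b *block) autoAssign(handle string) (net.IP, error) {
	b.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."

	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, taken := b.used[ip.String()]; !taken {
			b.used[ip.String()] = handle
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", b.cidr)
}

// next returns ip+1 (enough for IPv4 in this sketch).
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	b, _ := newBlock("192.168.88.128/26")
	for _, h := range []string{"coredns-8w5cr", "coredns-vdjph"} {
		ip, _ := b.autoAssign("k8s-pod-network." + h)
		fmt.Println(h, "->", ip)
	}
}
```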
Mar 6 01:36:49.949150 systemd-networkd[1392]: cali417624052ad: Link UP Mar 6 01:36:49.951478 systemd-networkd[1392]: cali417624052ad: Gained carrier Mar 6 01:36:49.998135 containerd[1459]: time="2026-03-06T01:36:49.996863126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cfdc76bfd-kz2gs,Uid:1287fe69-fec1-4536-8130-77a9ee5d6e26,Namespace:calico-system,Attempt:1,} returns sandbox id \"9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f\"" Mar 6 01:36:50.015995 containerd[1459]: time="2026-03-06T01:36:50.015733868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 01:36:50.018562 containerd[1459]: time="2026-03-06T01:36:50.016521264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfs88,Uid:885266c2-0ca6-482d-827c-cc1c88e284cf,Namespace:calico-system,Attempt:1,} returns sandbox id \"7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8\"" Mar 6 01:36:50.019503 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:48.921 [ERROR][4043] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:48.956 [INFO][4043] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--vdjph-eth0 coredns-674b8bbfcf- kube-system 8a67ece0-9c61-4759-9222-15c2c383bab1 972 0 2026-03-06 01:36:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-vdjph eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali417624052ad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdjph" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdjph-" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:48.956 [INFO][4043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdjph" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.237 [INFO][4089] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" HandleID="k8s-pod-network.3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.308 [INFO][4089] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" HandleID="k8s-pod-network.3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000577f50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-vdjph", "timestamp":"2026-03-06 01:36:49.23712203 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000738b00)} Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.309 [INFO][4089] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.663 [INFO][4089] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.663 [INFO][4089] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.667 [INFO][4089] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" host="localhost" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.683 [INFO][4089] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.702 [INFO][4089] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.710 [INFO][4089] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.720 [INFO][4089] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.721 [INFO][4089] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" host="localhost" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.725 [INFO][4089] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838 Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.772 [INFO][4089] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" host="localhost" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.862 [INFO][4089] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" host="localhost" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.862 [INFO][4089] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" host="localhost" Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.862 [INFO][4089] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:36:50.022150 containerd[1459]: 2026-03-06 01:36:49.863 [INFO][4089] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" HandleID="k8s-pod-network.3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:36:50.023820 containerd[1459]: 2026-03-06 01:36:49.927 [INFO][4043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdjph" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vdjph-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a67ece0-9c61-4759-9222-15c2c383bab1", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-vdjph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali417624052ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:50.023820 containerd[1459]: 2026-03-06 01:36:49.928 [INFO][4043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdjph" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:36:50.023820 containerd[1459]: 2026-03-06 01:36:49.929 [INFO][4043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali417624052ad ContainerID="3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdjph" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:36:50.023820 containerd[1459]: 2026-03-06 01:36:49.953 [INFO][4043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdjph" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:36:50.023820 
containerd[1459]: 2026-03-06 01:36:49.954 [INFO][4043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdjph" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vdjph-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a67ece0-9c61-4759-9222-15c2c383bab1", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838", Pod:"coredns-674b8bbfcf-vdjph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali417624052ad", MAC:"62:2b:5d:98:26:81", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:50.023820 containerd[1459]: 2026-03-06 01:36:49.987 [INFO][4043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdjph" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:36:50.041004 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:36:50.174750 systemd-networkd[1392]: calic34a6eb9234: Gained IPv6LL Mar 6 01:36:50.370751 containerd[1459]: time="2026-03-06T01:36:50.370559244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cfdc76bfd-tpll8,Uid:cd4720ad-a0f0-4e45-919b-1e9f495dfc32,Namespace:calico-system,Attempt:1,} returns sandbox id \"944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c\"" Mar 6 01:36:50.375605 containerd[1459]: time="2026-03-06T01:36:50.375243961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:50.375605 containerd[1459]: time="2026-03-06T01:36:50.375322889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:50.375605 containerd[1459]: time="2026-03-06T01:36:50.375409581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:50.379324 containerd[1459]: time="2026-03-06T01:36:50.375606821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:50.400202 systemd-networkd[1392]: calia66cd50184c: Link UP Mar 6 01:36:50.417149 systemd-networkd[1392]: calia66cd50184c: Gained carrier Mar 6 01:36:50.419078 systemd-networkd[1392]: caliac1c911f2c8: Gained IPv6LL Mar 6 01:36:50.493791 containerd[1459]: time="2026-03-06T01:36:50.493672769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b54d65f4-pdsbh,Uid:69b227be-3957-4eec-9624-244977470ca6,Namespace:calico-system,Attempt:1,} returns sandbox id \"72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b\"" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:48.962 [ERROR][4018] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.021 [INFO][4018] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--7867t-eth0 goldmane-5b85766d88- calico-system 9f2da77b-8614-4197-bbb1-1398be46188f 973 0 2026-03-06 01:36:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-7867t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia66cd50184c [] [] }} ContainerID="a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" Namespace="calico-system" Pod="goldmane-5b85766d88-7867t" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--7867t-" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.025 [INFO][4018] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" Namespace="calico-system" Pod="goldmane-5b85766d88-7867t" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.327 [INFO][4112] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" HandleID="k8s-pod-network.a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.405 [INFO][4112] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" HandleID="k8s-pod-network.a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ebc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-7867t", "timestamp":"2026-03-06 01:36:49.327436561 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000154000)} Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.405 [INFO][4112] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.863 [INFO][4112] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.863 [INFO][4112] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.868 [INFO][4112] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" host="localhost" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.888 [INFO][4112] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.902 [INFO][4112] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.929 [INFO][4112] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.975 [INFO][4112] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.976 [INFO][4112] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" host="localhost" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:49.989 [INFO][4112] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985 Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:50.012 [INFO][4112] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" host="localhost" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:50.278 [INFO][4112] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" host="localhost" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:50.278 [INFO][4112] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" host="localhost" Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:50.288 [INFO][4112] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
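By this point three workloads (.133, .134, .135) have drawn consecutive addresses from the same block, and the whisker pod will get .136 below: every pod on this node is served by the single host-affine /26. A quick Go check of what that block spans:

```go
package main

import (
	"fmt"
	"net"
)

// A /26 such as 192.168.88.128/26 spans 64 addresses
// (192.168.88.128-192.168.88.191), which is why the sequential pod
// IPs in these logs (.133, .134, .135, .136) all fall in one block.
func main() {
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("%s holds %d addresses\n", block, 1<<(bits-ones)) // 64

	// Compute the last address by OR-ing in the inverted mask.
	last := make(net.IP, len(block.IP))
	for i := range block.IP {
		last[i] = block.IP[i] | ^block.Mask[i]
	}
	fmt.Println("first:", block.IP, "last:", last) // .128 .. .191
}
```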
Mar 6 01:36:50.524145 containerd[1459]: 2026-03-06 01:36:50.289 [INFO][4112] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" HandleID="k8s-pod-network.a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:36:50.525778 containerd[1459]: 2026-03-06 01:36:50.320 [INFO][4018] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" Namespace="calico-system" Pod="goldmane-5b85766d88-7867t" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--7867t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--7867t-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"9f2da77b-8614-4197-bbb1-1398be46188f", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-7867t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia66cd50184c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:50.525778 containerd[1459]: 2026-03-06 01:36:50.327 [INFO][4018] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" Namespace="calico-system" Pod="goldmane-5b85766d88-7867t" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:36:50.525778 containerd[1459]: 2026-03-06 01:36:50.327 [INFO][4018] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia66cd50184c ContainerID="a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" Namespace="calico-system" Pod="goldmane-5b85766d88-7867t" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:36:50.525778 containerd[1459]: 2026-03-06 01:36:50.427 [INFO][4018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" Namespace="calico-system" Pod="goldmane-5b85766d88-7867t" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:36:50.525778 containerd[1459]: 2026-03-06 01:36:50.428 [INFO][4018] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" Namespace="calico-system" Pod="goldmane-5b85766d88-7867t" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--7867t-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--7867t-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"9f2da77b-8614-4197-bbb1-1398be46188f", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985", Pod:"goldmane-5b85766d88-7867t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia66cd50184c", MAC:"66:31:6c:6f:90:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:50.525778 containerd[1459]: 2026-03-06 01:36:50.481 [INFO][4018] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985" Namespace="calico-system" Pod="goldmane-5b85766d88-7867t" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:36:50.545847 containerd[1459]: time="2026-03-06T01:36:50.539195508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:50.545847 containerd[1459]: time="2026-03-06T01:36:50.539279786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:50.545847 containerd[1459]: time="2026-03-06T01:36:50.539315803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:50.545847 containerd[1459]: time="2026-03-06T01:36:50.539477877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:50.606205 systemd[1]: Started cri-containerd-d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692.scope - libcontainer container d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692. Mar 6 01:36:50.650792 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:36:50.651250 systemd[1]: Started cri-containerd-3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838.scope - libcontainer container 3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838. Mar 6 01:36:50.700870 containerd[1459]: time="2026-03-06T01:36:50.700684630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:50.701523 containerd[1459]: time="2026-03-06T01:36:50.701219542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:50.701741 containerd[1459]: time="2026-03-06T01:36:50.701699682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:50.704185 containerd[1459]: time="2026-03-06T01:36:50.704075508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:50.737751 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:36:50.792391 containerd[1459]: time="2026-03-06T01:36:50.792304164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8w5cr,Uid:259cb996-fbdd-4a81-b770-165dc5d9d831,Namespace:kube-system,Attempt:1,} returns sandbox id \"d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692\"" Mar 6 01:36:50.793576 kubelet[2549]: E0306 01:36:50.793541 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:50.806270 containerd[1459]: time="2026-03-06T01:36:50.805872795Z" level=info msg="CreateContainer within sandbox \"d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:36:50.830256 systemd[1]: Started cri-containerd-a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985.scope - libcontainer container a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985. Mar 6 01:36:50.909569 containerd[1459]: time="2026-03-06T01:36:50.908817943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdjph,Uid:8a67ece0-9c61-4759-9222-15c2c383bab1,Namespace:kube-system,Attempt:1,} returns sandbox id \"3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838\"" Mar 6 01:36:50.913406 kubelet[2549]: E0306 01:36:50.912734 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:50.918196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3193304951.mount: Deactivated successfully. Mar 6 01:36:50.929497 containerd[1459]: time="2026-03-06T01:36:50.928265680Z" level=info msg="CreateContainer within sandbox \"3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:36:50.944842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657084211.mount: Deactivated successfully. 
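The kubelet dns.go:153 errors repeating through this section are the resolv.conf nameserver cap: glibc's resolver only honors the first three nameserver entries (MAXNS), so kubelet warns and applies exactly three ("1.1.1.1 1.0.0.1 8.8.8.8" above) when the host lists more. A small Go sketch of that truncation (the path is the conventional /etc/resolv.conf):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// glibc uses at most 3 "nameserver" lines from resolv.conf; any
// extras are silently ignored by the resolver, and kubelet makes
// that explicit with its "Nameserver limits exceeded" warning.
const maxNS = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNS {
		fmt.Printf("nameserver limits exceeded, omitting %v\n", servers[maxNS:])
		servers = servers[:maxNS]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```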
Mar 6 01:36:50.954998 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:36:50.982157 containerd[1459]: time="2026-03-06T01:36:50.982068873Z" level=info msg="CreateContainer within sandbox \"d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c52b75650ed7f993d394209910507310f431721a15474484e13164f1c0126d5b\"" Mar 6 01:36:50.984993 containerd[1459]: time="2026-03-06T01:36:50.984243400Z" level=info msg="StartContainer for \"c52b75650ed7f993d394209910507310f431721a15474484e13164f1c0126d5b\"" Mar 6 01:36:50.998474 containerd[1459]: time="2026-03-06T01:36:50.998235856Z" level=info msg="CreateContainer within sandbox \"3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87e75160c278406dfe1960fe1d403e9c17cf727aaa2d97ab51535770b9d002a2\"" Mar 6 01:36:50.999024 kubelet[2549]: I0306 01:36:50.998596 2549 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50410f0e-4b03-4463-ad7c-49d16c007f3a" path="/var/lib/kubelet/pods/50410f0e-4b03-4463-ad7c-49d16c007f3a/volumes" Mar 6 01:36:51.003276 containerd[1459]: time="2026-03-06T01:36:51.003224261Z" level=info msg="StartContainer for \"87e75160c278406dfe1960fe1d403e9c17cf727aaa2d97ab51535770b9d002a2\"" Mar 6 01:36:51.018445 systemd-networkd[1392]: cali2d065ff2896: Link UP Mar 6 01:36:51.020147 systemd-networkd[1392]: cali2d065ff2896: Gained carrier Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.419 [ERROR][4438] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.524 [INFO][4438] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5858797979--6kp78-eth0 whisker-5858797979- calico-system 956351bc-0070-433f-97da-835f4c03af41 1010 0 2026-03-06 01:36:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5858797979 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5858797979-6kp78 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2d065ff2896 [] [] }} ContainerID="ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" Namespace="calico-system" Pod="whisker-5858797979-6kp78" WorkloadEndpoint="localhost-k8s-whisker--5858797979--6kp78-" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.525 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" Namespace="calico-system" Pod="whisker-5858797979-6kp78" WorkloadEndpoint="localhost-k8s-whisker--5858797979--6kp78-eth0" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.890 [INFO][4550] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" HandleID="k8s-pod-network.ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" Workload="localhost-k8s-whisker--5858797979--6kp78-eth0" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.904 [INFO][4550] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" HandleID="k8s-pod-network.ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" Workload="localhost-k8s-whisker--5858797979--6kp78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001bbe30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5858797979-6kp78", "timestamp":"2026-03-06 01:36:50.890524813 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00034f080)} Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.904 [INFO][4550] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.904 [INFO][4550] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.904 [INFO][4550] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.909 [INFO][4550] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" host="localhost" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.948 [INFO][4550] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.964 [INFO][4550] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.969 [INFO][4550] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.974 [INFO][4550] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.974 [INFO][4550] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" host="localhost" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.978 [INFO][4550] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4 Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:50.989 [INFO][4550] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" host="localhost" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:51.007 [INFO][4550] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" host="localhost" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:51.007 [INFO][4550] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" host="localhost" Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:51.007 [INFO][4550] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:36:51.060601 containerd[1459]: 2026-03-06 01:36:51.007 [INFO][4550] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" HandleID="k8s-pod-network.ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" Workload="localhost-k8s-whisker--5858797979--6kp78-eth0" Mar 6 01:36:51.060444 systemd-networkd[1392]: calif7a7c6c0897: Gained IPv6LL Mar 6 01:36:51.061307 containerd[1459]: 2026-03-06 01:36:51.015 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" Namespace="calico-system" Pod="whisker-5858797979-6kp78" WorkloadEndpoint="localhost-k8s-whisker--5858797979--6kp78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5858797979--6kp78-eth0", GenerateName:"whisker-5858797979-", Namespace:"calico-system", SelfLink:"", UID:"956351bc-0070-433f-97da-835f4c03af41", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5858797979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5858797979-6kp78", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d065ff2896", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:51.061307 containerd[1459]: 2026-03-06 01:36:51.015 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" Namespace="calico-system" Pod="whisker-5858797979-6kp78" WorkloadEndpoint="localhost-k8s-whisker--5858797979--6kp78-eth0" Mar 6 01:36:51.061307 containerd[1459]: 2026-03-06 01:36:51.015 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d065ff2896 ContainerID="ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" Namespace="calico-system" Pod="whisker-5858797979-6kp78" WorkloadEndpoint="localhost-k8s-whisker--5858797979--6kp78-eth0" Mar 6 01:36:51.061307 containerd[1459]: 2026-03-06 01:36:51.021 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" Namespace="calico-system" Pod="whisker-5858797979-6kp78" WorkloadEndpoint="localhost-k8s-whisker--5858797979--6kp78-eth0" Mar 6 01:36:51.061307 containerd[1459]: 2026-03-06 01:36:51.021 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" Namespace="calico-system" Pod="whisker-5858797979-6kp78" 
WorkloadEndpoint="localhost-k8s-whisker--5858797979--6kp78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5858797979--6kp78-eth0", GenerateName:"whisker-5858797979-", Namespace:"calico-system", SelfLink:"", UID:"956351bc-0070-433f-97da-835f4c03af41", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5858797979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4", Pod:"whisker-5858797979-6kp78", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d065ff2896", MAC:"2e:ca:e8:10:81:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:36:51.061307 containerd[1459]: 2026-03-06 01:36:51.035 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4" Namespace="calico-system" Pod="whisker-5858797979-6kp78" WorkloadEndpoint="localhost-k8s-whisker--5858797979--6kp78-eth0" Mar 6 01:36:51.082240 systemd[1]: Started cri-containerd-c52b75650ed7f993d394209910507310f431721a15474484e13164f1c0126d5b.scope - libcontainer container c52b75650ed7f993d394209910507310f431721a15474484e13164f1c0126d5b. Mar 6 01:36:51.088390 containerd[1459]: time="2026-03-06T01:36:51.088037923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-7867t,Uid:9f2da77b-8614-4197-bbb1-1398be46188f,Namespace:calico-system,Attempt:1,} returns sandbox id \"a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985\"" Mar 6 01:36:51.106159 systemd[1]: Started cri-containerd-87e75160c278406dfe1960fe1d403e9c17cf727aaa2d97ab51535770b9d002a2.scope - libcontainer container 87e75160c278406dfe1960fe1d403e9c17cf727aaa2d97ab51535770b9d002a2. Mar 6 01:36:51.112400 kernel: calico-node[4145]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 6 01:36:51.145536 containerd[1459]: time="2026-03-06T01:36:51.144115483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:36:51.145536 containerd[1459]: time="2026-03-06T01:36:51.144228054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:36:51.145536 containerd[1459]: time="2026-03-06T01:36:51.144238734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:51.145536 containerd[1459]: time="2026-03-06T01:36:51.144493692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:36:51.172691 containerd[1459]: time="2026-03-06T01:36:51.172521544Z" level=info msg="StartContainer for \"c52b75650ed7f993d394209910507310f431721a15474484e13164f1c0126d5b\" returns successfully" Mar 6 01:36:51.182597 containerd[1459]: time="2026-03-06T01:36:51.182558969Z" level=info msg="StartContainer for \"87e75160c278406dfe1960fe1d403e9c17cf727aaa2d97ab51535770b9d002a2\" returns successfully" Mar 6 01:36:51.197694 systemd-networkd[1392]: cali417624052ad: Gained IPv6LL Mar 6 01:36:51.237284 systemd[1]: Started cri-containerd-ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4.scope - libcontainer container ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4. Mar 6 01:36:51.299080 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:36:51.334391 kubelet[2549]: E0306 01:36:51.334285 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:51.370075 kubelet[2549]: E0306 01:36:51.364440 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:51.410123 containerd[1459]: time="2026-03-06T01:36:51.408595097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5858797979-6kp78,Uid:956351bc-0070-433f-97da-835f4c03af41,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4\"" Mar 6 01:36:51.415149 kubelet[2549]: I0306 01:36:51.414983 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8w5cr" podStartSLOduration=41.414963296 podStartE2EDuration="41.414963296s" podCreationTimestamp="2026-03-06 01:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:36:51.384340188 +0000 UTC m=+44.605285465" watchObservedRunningTime="2026-03-06 01:36:51.414963296 +0000 UTC m=+44.635908582" Mar 6 01:36:51.415149 kubelet[2549]: I0306 01:36:51.415128 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vdjph" podStartSLOduration=41.415121492 podStartE2EDuration="41.415121492s" podCreationTimestamp="2026-03-06 01:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:36:51.414252774 +0000 UTC m=+44.635198051" watchObservedRunningTime="2026-03-06 01:36:51.415121492 +0000 UTC m=+44.636066788" Mar 6 01:36:51.444379 systemd-networkd[1392]: cali8a032a72858: Gained IPv6LL Mar 6 01:36:51.699469 systemd-networkd[1392]: calidc1cd09f19e: Gained IPv6LL Mar 6 01:36:52.122634 systemd-networkd[1392]: vxlan.calico: Link UP Mar 6 01:36:52.122645 systemd-networkd[1392]: vxlan.calico: Gained carrier Mar 6 01:36:52.151668 systemd-networkd[1392]: cali2d065ff2896: Gained IPv6LL Mar 6 01:36:52.367686 kubelet[2549]: E0306 01:36:52.367650 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:52.370123 kubelet[2549]: E0306 01:36:52.368784 2549 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:52.473458 systemd-networkd[1392]: calia66cd50184c: Gained IPv6LL Mar 6 01:36:52.959724 containerd[1459]: time="2026-03-06T01:36:52.959663579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:52.961610 containerd[1459]: time="2026-03-06T01:36:52.961504189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 6 01:36:52.968795 containerd[1459]: time="2026-03-06T01:36:52.968681947Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:52.973016 containerd[1459]: time="2026-03-06T01:36:52.972873262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:52.974398 containerd[1459]: time="2026-03-06T01:36:52.974278455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.958454198s" Mar 6 01:36:52.974398 containerd[1459]: time="2026-03-06T01:36:52.974345389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 01:36:52.975802 containerd[1459]: time="2026-03-06T01:36:52.975749627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 6 01:36:52.981477 containerd[1459]: time="2026-03-06T01:36:52.981425632Z" level=info msg="CreateContainer within sandbox \"9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 01:36:53.005228 containerd[1459]: time="2026-03-06T01:36:53.005174667Z" level=info msg="CreateContainer within sandbox \"9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e449abb39a6d3d6f913db123b0321ee7e690f5434ae392f347109102f918ec60\"" Mar 6 01:36:53.006983 containerd[1459]: time="2026-03-06T01:36:53.006876876Z" level=info msg="StartContainer for \"e449abb39a6d3d6f913db123b0321ee7e690f5434ae392f347109102f918ec60\"" Mar 6 01:36:53.057194 systemd[1]: Started cri-containerd-e449abb39a6d3d6f913db123b0321ee7e690f5434ae392f347109102f918ec60.scope - libcontainer container e449abb39a6d3d6f913db123b0321ee7e690f5434ae392f347109102f918ec60. 
Mar 6 01:36:53.112580 containerd[1459]: time="2026-03-06T01:36:53.112456777Z" level=info msg="StartContainer for \"e449abb39a6d3d6f913db123b0321ee7e690f5434ae392f347109102f918ec60\" returns successfully" Mar 6 01:36:53.382781 kubelet[2549]: E0306 01:36:53.382419 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:53.382781 kubelet[2549]: E0306 01:36:53.382575 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:53.667407 containerd[1459]: time="2026-03-06T01:36:53.667339597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:53.668687 containerd[1459]: time="2026-03-06T01:36:53.668465849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 6 01:36:53.670140 containerd[1459]: time="2026-03-06T01:36:53.670098431Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:53.673012 containerd[1459]: time="2026-03-06T01:36:53.672951429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:53.674080 containerd[1459]: time="2026-03-06T01:36:53.674037847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 698.227738ms" Mar 6 01:36:53.674142 containerd[1459]: time="2026-03-06T01:36:53.674089985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 6 01:36:53.678076 containerd[1459]: time="2026-03-06T01:36:53.676433003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 01:36:53.680284 containerd[1459]: time="2026-03-06T01:36:53.680150398Z" level=info msg="CreateContainer within sandbox \"7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 6 01:36:53.704213 containerd[1459]: time="2026-03-06T01:36:53.704115074Z" level=info msg="CreateContainer within sandbox \"7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e793f4ca93ba8eee78c8b4090f9efe4db8d165de0c9e3141ba55319cdd627229\"" Mar 6 01:36:53.705398 containerd[1459]: time="2026-03-06T01:36:53.705221757Z" level=info msg="StartContainer for \"e793f4ca93ba8eee78c8b4090f9efe4db8d165de0c9e3141ba55319cdd627229\"" Mar 6 01:36:53.748597 systemd-networkd[1392]: vxlan.calico: Gained IPv6LL Mar 6 01:36:53.748671 systemd[1]: Started cri-containerd-e793f4ca93ba8eee78c8b4090f9efe4db8d165de0c9e3141ba55319cdd627229.scope - libcontainer container e793f4ca93ba8eee78c8b4090f9efe4db8d165de0c9e3141ba55319cdd627229. 
Mar 6 01:36:53.782474 containerd[1459]: time="2026-03-06T01:36:53.781639204Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:53.784466 containerd[1459]: time="2026-03-06T01:36:53.784352237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 6 01:36:53.786726 containerd[1459]: time="2026-03-06T01:36:53.786699034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 110.212372ms" Mar 6 01:36:53.786812 containerd[1459]: time="2026-03-06T01:36:53.786796267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 01:36:53.788977 containerd[1459]: time="2026-03-06T01:36:53.788871961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 6 01:36:53.794306 containerd[1459]: time="2026-03-06T01:36:53.794227812Z" level=info msg="CreateContainer within sandbox \"944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 01:36:53.818209 containerd[1459]: time="2026-03-06T01:36:53.818132195Z" level=info msg="CreateContainer within sandbox \"944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a5ce825b41a3fe4e2c22f0b747e0d8586aba06721c2a567c1a89b840ff24451d\"" Mar 6 01:36:53.819880 containerd[1459]: time="2026-03-06T01:36:53.819847468Z" level=info msg="StartContainer for \"e793f4ca93ba8eee78c8b4090f9efe4db8d165de0c9e3141ba55319cdd627229\" returns successfully" Mar 6 01:36:53.821184 containerd[1459]: time="2026-03-06T01:36:53.821131524Z" level=info msg="StartContainer for \"a5ce825b41a3fe4e2c22f0b747e0d8586aba06721c2a567c1a89b840ff24451d\"" Mar 6 01:36:53.871107 systemd[1]: Started cri-containerd-a5ce825b41a3fe4e2c22f0b747e0d8586aba06721c2a567c1a89b840ff24451d.scope - libcontainer container a5ce825b41a3fe4e2c22f0b747e0d8586aba06721c2a567c1a89b840ff24451d. 
Mar 6 01:36:53.928244 containerd[1459]: time="2026-03-06T01:36:53.926838415Z" level=info msg="StartContainer for \"a5ce825b41a3fe4e2c22f0b747e0d8586aba06721c2a567c1a89b840ff24451d\" returns successfully" Mar 6 01:36:54.408119 kubelet[2549]: I0306 01:36:54.408069 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:36:54.409036 kubelet[2549]: E0306 01:36:54.408998 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:36:54.429796 kubelet[2549]: I0306 01:36:54.429601 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6cfdc76bfd-kz2gs" podStartSLOduration=26.461154483 podStartE2EDuration="29.429577262s" podCreationTimestamp="2026-03-06 01:36:25 +0000 UTC" firstStartedPulling="2026-03-06 01:36:50.007175839 +0000 UTC m=+43.228121115" lastFinishedPulling="2026-03-06 01:36:52.975598619 +0000 UTC m=+46.196543894" observedRunningTime="2026-03-06 01:36:53.398981173 +0000 UTC m=+46.619926449" watchObservedRunningTime="2026-03-06 01:36:54.429577262 +0000 UTC m=+47.650522538" Mar 6 01:36:55.569692 kubelet[2549]: I0306 01:36:55.569275 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6cfdc76bfd-tpll8" podStartSLOduration=27.156992193 podStartE2EDuration="30.569252688s" podCreationTimestamp="2026-03-06 01:36:25 +0000 UTC" firstStartedPulling="2026-03-06 01:36:50.375533651 +0000 UTC m=+43.596478928" lastFinishedPulling="2026-03-06 01:36:53.787794147 +0000 UTC m=+47.008739423" observedRunningTime="2026-03-06 01:36:54.430116854 +0000 UTC m=+47.651062140" watchObservedRunningTime="2026-03-06 01:36:55.569252688 +0000 UTC m=+48.790197984" Mar 6 01:36:55.799423 containerd[1459]: time="2026-03-06T01:36:55.799268944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:55.800882 containerd[1459]: time="2026-03-06T01:36:55.800786386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 6 01:36:55.802342 containerd[1459]: time="2026-03-06T01:36:55.802278171Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:55.825648 containerd[1459]: time="2026-03-06T01:36:55.825353229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:55.826931 containerd[1459]: time="2026-03-06T01:36:55.826767984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.037787589s" Mar 6 01:36:55.827054 containerd[1459]: time="2026-03-06T01:36:55.826941959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 6 
01:36:55.828516 containerd[1459]: time="2026-03-06T01:36:55.828448822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 6 01:36:55.858269 containerd[1459]: time="2026-03-06T01:36:55.858196679Z" level=info msg="CreateContainer within sandbox \"72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 6 01:36:55.880606 containerd[1459]: time="2026-03-06T01:36:55.880457694Z" level=info msg="CreateContainer within sandbox \"72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"52a48705d9f67a4df70b26f98f48612ecde1c03a4b51d0747982441693bd3aa9\"" Mar 6 01:36:55.881739 containerd[1459]: time="2026-03-06T01:36:55.881352619Z" level=info msg="StartContainer for \"52a48705d9f67a4df70b26f98f48612ecde1c03a4b51d0747982441693bd3aa9\"" Mar 6 01:36:55.936150 systemd[1]: Started cri-containerd-52a48705d9f67a4df70b26f98f48612ecde1c03a4b51d0747982441693bd3aa9.scope - libcontainer container 52a48705d9f67a4df70b26f98f48612ecde1c03a4b51d0747982441693bd3aa9. Mar 6 01:36:56.011092 containerd[1459]: time="2026-03-06T01:36:56.010966896Z" level=info msg="StartContainer for \"52a48705d9f67a4df70b26f98f48612ecde1c03a4b51d0747982441693bd3aa9\" returns successfully" Mar 6 01:36:57.528752 kubelet[2549]: I0306 01:36:57.528668 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85b54d65f4-pdsbh" podStartSLOduration=26.222924634 podStartE2EDuration="31.528639376s" podCreationTimestamp="2026-03-06 01:36:26 +0000 UTC" firstStartedPulling="2026-03-06 01:36:50.522413265 +0000 UTC m=+43.743358541" lastFinishedPulling="2026-03-06 01:36:55.828127986 +0000 UTC m=+49.049073283" observedRunningTime="2026-03-06 01:36:56.431837135 +0000 UTC m=+49.652782421" watchObservedRunningTime="2026-03-06 01:36:57.528639376 +0000 UTC m=+50.749584692" Mar 6 01:36:57.866785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413077971.mount: Deactivated successfully. 
Mar 6 01:36:58.325207 containerd[1459]: time="2026-03-06T01:36:58.325065509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:58.326438 containerd[1459]: time="2026-03-06T01:36:58.326336588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 6 01:36:58.327658 containerd[1459]: time="2026-03-06T01:36:58.327584951Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:58.330688 containerd[1459]: time="2026-03-06T01:36:58.330602863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:58.331650 containerd[1459]: time="2026-03-06T01:36:58.331571246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.503083691s" Mar 6 01:36:58.331650 containerd[1459]: time="2026-03-06T01:36:58.331618665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 6 01:36:58.334279 containerd[1459]: time="2026-03-06T01:36:58.334228151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 6 01:36:58.338549 containerd[1459]: time="2026-03-06T01:36:58.338511028Z" level=info msg="CreateContainer within sandbox \"a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 6 01:36:58.401294 containerd[1459]: time="2026-03-06T01:36:58.401168531Z" level=info msg="CreateContainer within sandbox \"a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e72da28bb258cc9c9d68b265e5a15b891164dd8f215a90e8232f923cae1a01d8\"" Mar 6 01:36:58.401742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262517194.mount: Deactivated successfully. Mar 6 01:36:58.402081 containerd[1459]: time="2026-03-06T01:36:58.402027829Z" level=info msg="StartContainer for \"e72da28bb258cc9c9d68b265e5a15b891164dd8f215a90e8232f923cae1a01d8\"" Mar 6 01:36:58.511332 systemd[1]: Started cri-containerd-e72da28bb258cc9c9d68b265e5a15b891164dd8f215a90e8232f923cae1a01d8.scope - libcontainer container e72da28bb258cc9c9d68b265e5a15b891164dd8f215a90e8232f923cae1a01d8. 
Mar 6 01:36:58.598496 containerd[1459]: time="2026-03-06T01:36:58.598091505Z" level=info msg="StartContainer for \"e72da28bb258cc9c9d68b265e5a15b891164dd8f215a90e8232f923cae1a01d8\" returns successfully" Mar 6 01:36:59.452996 kubelet[2549]: I0306 01:36:59.452683 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-7867t" podStartSLOduration=27.214806719 podStartE2EDuration="34.452658118s" podCreationTimestamp="2026-03-06 01:36:25 +0000 UTC" firstStartedPulling="2026-03-06 01:36:51.095326115 +0000 UTC m=+44.316271401" lastFinishedPulling="2026-03-06 01:36:58.333177524 +0000 UTC m=+51.554122800" observedRunningTime="2026-03-06 01:36:59.451007819 +0000 UTC m=+52.671953105" watchObservedRunningTime="2026-03-06 01:36:59.452658118 +0000 UTC m=+52.673603394" Mar 6 01:36:59.611514 containerd[1459]: time="2026-03-06T01:36:59.611365649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:59.612586 containerd[1459]: time="2026-03-06T01:36:59.612455285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 6 01:36:59.614579 containerd[1459]: time="2026-03-06T01:36:59.614506092Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:59.627036 containerd[1459]: time="2026-03-06T01:36:59.626884056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:36:59.628658 containerd[1459]: time="2026-03-06T01:36:59.628564223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.294279957s" Mar 6 01:36:59.628658 containerd[1459]: time="2026-03-06T01:36:59.628633924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 6 01:36:59.630323 containerd[1459]: time="2026-03-06T01:36:59.630082421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 6 01:36:59.635210 containerd[1459]: time="2026-03-06T01:36:59.635043328Z" level=info msg="CreateContainer within sandbox \"ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 6 01:36:59.674819 containerd[1459]: time="2026-03-06T01:36:59.674727519Z" level=info msg="CreateContainer within sandbox \"ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a89fe18ba678e7651a396ab2736e615987ab269f8aa9c054acee880201aa641d\"" Mar 6 01:36:59.675872 containerd[1459]: time="2026-03-06T01:36:59.675607499Z" level=info msg="StartContainer for \"a89fe18ba678e7651a396ab2736e615987ab269f8aa9c054acee880201aa641d\"" Mar 6 01:36:59.724151 systemd[1]: Started cri-containerd-a89fe18ba678e7651a396ab2736e615987ab269f8aa9c054acee880201aa641d.scope - libcontainer container 
a89fe18ba678e7651a396ab2736e615987ab269f8aa9c054acee880201aa641d. Mar 6 01:36:59.795841 containerd[1459]: time="2026-03-06T01:36:59.795774573Z" level=info msg="StartContainer for \"a89fe18ba678e7651a396ab2736e615987ab269f8aa9c054acee880201aa641d\" returns successfully" Mar 6 01:37:00.542573 containerd[1459]: time="2026-03-06T01:37:00.542335602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:37:00.546544 containerd[1459]: time="2026-03-06T01:37:00.546188804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 6 01:37:00.551123 containerd[1459]: time="2026-03-06T01:37:00.551083158Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:37:00.562345 containerd[1459]: time="2026-03-06T01:37:00.560821026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:37:00.562345 containerd[1459]: time="2026-03-06T01:37:00.562047757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 931.867504ms" Mar 6 01:37:00.562345 containerd[1459]: time="2026-03-06T01:37:00.562092822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 6 01:37:00.566868 containerd[1459]: time="2026-03-06T01:37:00.566834265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 6 01:37:00.577342 containerd[1459]: time="2026-03-06T01:37:00.577245306Z" level=info msg="CreateContainer within sandbox \"7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 6 01:37:00.614191 containerd[1459]: time="2026-03-06T01:37:00.614091627Z" level=info msg="CreateContainer within sandbox \"7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a04eb886364c92e65d1f8e5e75286d1858a4cae56bddb8d18f475adddb68efec\"" Mar 6 01:37:00.616203 containerd[1459]: time="2026-03-06T01:37:00.616142600Z" level=info msg="StartContainer for \"a04eb886364c92e65d1f8e5e75286d1858a4cae56bddb8d18f475adddb68efec\"" Mar 6 01:37:00.709209 systemd[1]: Started cri-containerd-a04eb886364c92e65d1f8e5e75286d1858a4cae56bddb8d18f475adddb68efec.scope - libcontainer container a04eb886364c92e65d1f8e5e75286d1858a4cae56bddb8d18f475adddb68efec. 
Mar 6 01:37:00.803069 containerd[1459]: time="2026-03-06T01:37:00.801952847Z" level=info msg="StartContainer for \"a04eb886364c92e65d1f8e5e75286d1858a4cae56bddb8d18f475adddb68efec\" returns successfully" Mar 6 01:37:01.418091 kubelet[2549]: I0306 01:37:01.416382 2549 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 6 01:37:01.429576 kubelet[2549]: I0306 01:37:01.429487 2549 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 6 01:37:01.662763 systemd[1]: run-containerd-runc-k8s.io-e72da28bb258cc9c9d68b265e5a15b891164dd8f215a90e8232f923cae1a01d8-runc.IjvpBb.mount: Deactivated successfully. Mar 6 01:37:01.687845 kubelet[2549]: I0306 01:37:01.686830 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kfs88" podStartSLOduration=25.190016508 podStartE2EDuration="35.686807707s" podCreationTimestamp="2026-03-06 01:36:26 +0000 UTC" firstStartedPulling="2026-03-06 01:36:50.067642728 +0000 UTC m=+43.288588004" lastFinishedPulling="2026-03-06 01:37:00.564433917 +0000 UTC m=+53.785379203" observedRunningTime="2026-03-06 01:37:01.473034149 +0000 UTC m=+54.693979424" watchObservedRunningTime="2026-03-06 01:37:01.686807707 +0000 UTC m=+54.907753003" Mar 6 01:37:01.906616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111491612.mount: Deactivated successfully. Mar 6 01:37:01.932571 containerd[1459]: time="2026-03-06T01:37:01.932312943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:37:01.933986 containerd[1459]: time="2026-03-06T01:37:01.933820755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 6 01:37:01.936058 containerd[1459]: time="2026-03-06T01:37:01.935972483Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:37:01.941287 containerd[1459]: time="2026-03-06T01:37:01.940844924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:37:01.942640 containerd[1459]: time="2026-03-06T01:37:01.942503499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.375123544s" Mar 6 01:37:01.942743 containerd[1459]: time="2026-03-06T01:37:01.942670903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 6 01:37:01.951873 containerd[1459]: time="2026-03-06T01:37:01.951742538Z" level=info msg="CreateContainer within sandbox \"ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 6 01:37:01.984663 
containerd[1459]: time="2026-03-06T01:37:01.984572102Z" level=info msg="CreateContainer within sandbox \"ba6d583a9b3e63ec05a933698a7e0c67e2a79ab4cf9d41f993465ff406292fc4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"878453a95b7111ec92709565d11eda9a5b1bc5bf160b03db3eea74e1d64a7aff\"" Mar 6 01:37:01.985386 containerd[1459]: time="2026-03-06T01:37:01.985357001Z" level=info msg="StartContainer for \"878453a95b7111ec92709565d11eda9a5b1bc5bf160b03db3eea74e1d64a7aff\"" Mar 6 01:37:02.025136 systemd[1]: Started cri-containerd-878453a95b7111ec92709565d11eda9a5b1bc5bf160b03db3eea74e1d64a7aff.scope - libcontainer container 878453a95b7111ec92709565d11eda9a5b1bc5bf160b03db3eea74e1d64a7aff. Mar 6 01:37:02.099207 containerd[1459]: time="2026-03-06T01:37:02.099140193Z" level=info msg="StartContainer for \"878453a95b7111ec92709565d11eda9a5b1bc5bf160b03db3eea74e1d64a7aff\" returns successfully" Mar 6 01:37:02.483946 kubelet[2549]: I0306 01:37:02.482648 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5858797979-6kp78" podStartSLOduration=2.9622312920000002 podStartE2EDuration="13.482624707s" podCreationTimestamp="2026-03-06 01:36:49 +0000 UTC" firstStartedPulling="2026-03-06 01:36:51.425167764 +0000 UTC m=+44.646113041" lastFinishedPulling="2026-03-06 01:37:01.945561179 +0000 UTC m=+55.166506456" observedRunningTime="2026-03-06 01:37:02.480324596 +0000 UTC m=+55.701270022" watchObservedRunningTime="2026-03-06 01:37:02.482624707 +0000 UTC m=+55.703570012" Mar 6 01:37:06.943745 containerd[1459]: time="2026-03-06T01:37:06.942774561Z" level=info msg="StopPodSandbox for \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\"" Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.067 [WARNING][5391] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" WorkloadEndpoint="localhost-k8s-whisker--6c5f76998--vkbq8-eth0" Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.068 [INFO][5391] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.068 [INFO][5391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" iface="eth0" netns="" Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.068 [INFO][5391] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.068 [INFO][5391] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.168 [INFO][5400] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" HandleID="k8s-pod-network.82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Workload="localhost-k8s-whisker--6c5f76998--vkbq8-eth0" Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.169 [INFO][5400] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.169 [INFO][5400] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.178 [WARNING][5400] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" HandleID="k8s-pod-network.82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Workload="localhost-k8s-whisker--6c5f76998--vkbq8-eth0" Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.178 [INFO][5400] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" HandleID="k8s-pod-network.82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Workload="localhost-k8s-whisker--6c5f76998--vkbq8-eth0" Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.180 [INFO][5400] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:07.187820 containerd[1459]: 2026-03-06 01:37:07.184 [INFO][5391] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:37:07.194496 containerd[1459]: time="2026-03-06T01:37:07.194320760Z" level=info msg="TearDown network for sandbox \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\" successfully" Mar 6 01:37:07.194496 containerd[1459]: time="2026-03-06T01:37:07.194387584Z" level=info msg="StopPodSandbox for \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\" returns successfully" Mar 6 01:37:07.221236 containerd[1459]: time="2026-03-06T01:37:07.221137847Z" level=info msg="RemovePodSandbox for \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\"" Mar 6 01:37:07.223272 containerd[1459]: time="2026-03-06T01:37:07.223204390Z" level=info msg="Forcibly stopping sandbox \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\"" Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.302 [WARNING][5419] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" WorkloadEndpoint="localhost-k8s-whisker--6c5f76998--vkbq8-eth0" Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.303 [INFO][5419] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.303 [INFO][5419] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" iface="eth0" netns="" Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.303 [INFO][5419] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.303 [INFO][5419] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.338 [INFO][5427] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" HandleID="k8s-pod-network.82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Workload="localhost-k8s-whisker--6c5f76998--vkbq8-eth0" Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.338 [INFO][5427] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.339 [INFO][5427] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.348 [WARNING][5427] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" HandleID="k8s-pod-network.82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Workload="localhost-k8s-whisker--6c5f76998--vkbq8-eth0" Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.348 [INFO][5427] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" HandleID="k8s-pod-network.82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Workload="localhost-k8s-whisker--6c5f76998--vkbq8-eth0" Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.351 [INFO][5427] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:07.358277 containerd[1459]: 2026-03-06 01:37:07.355 [INFO][5419] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5" Mar 6 01:37:07.358725 containerd[1459]: time="2026-03-06T01:37:07.358330987Z" level=info msg="TearDown network for sandbox \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\" successfully" Mar 6 01:37:07.395202 containerd[1459]: time="2026-03-06T01:37:07.395108415Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:37:07.395344 containerd[1459]: time="2026-03-06T01:37:07.395253066Z" level=info msg="RemovePodSandbox \"82faa401690b9b5be3937546151e2bb04755045086d77b64e55ea069508178b5\" returns successfully" Mar 6 01:37:07.401669 containerd[1459]: time="2026-03-06T01:37:07.401556244Z" level=info msg="StopPodSandbox for \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\"" Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.475 [WARNING][5445] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"259cb996-fbdd-4a81-b770-165dc5d9d831", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692", Pod:"coredns-674b8bbfcf-8w5cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc1cd09f19e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.475 [INFO][5445] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.476 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" iface="eth0" netns="" Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.476 [INFO][5445] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.476 [INFO][5445] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.503 [INFO][5453] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" HandleID="k8s-pod-network.7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.504 [INFO][5453] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.504 [INFO][5453] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.512 [WARNING][5453] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" HandleID="k8s-pod-network.7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.512 [INFO][5453] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" HandleID="k8s-pod-network.7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.516 [INFO][5453] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:07.521857 containerd[1459]: 2026-03-06 01:37:07.518 [INFO][5445] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:37:07.521857 containerd[1459]: time="2026-03-06T01:37:07.521810604Z" level=info msg="TearDown network for sandbox \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\" successfully" Mar 6 01:37:07.521857 containerd[1459]: time="2026-03-06T01:37:07.521853533Z" level=info msg="StopPodSandbox for \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\" returns successfully" Mar 6 01:37:07.523525 containerd[1459]: time="2026-03-06T01:37:07.523088315Z" level=info msg="RemovePodSandbox for \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\"" Mar 6 01:37:07.523525 containerd[1459]: time="2026-03-06T01:37:07.523133930Z" level=info msg="Forcibly stopping sandbox \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\"" Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.573 [WARNING][5471] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"259cb996-fbdd-4a81-b770-165dc5d9d831", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8d3a3c9253c3302129e4153ed91c9e9292970898a5197debda4740806e12692", Pod:"coredns-674b8bbfcf-8w5cr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc1cd09f19e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.573 [INFO][5471] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.573 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" iface="eth0" netns="" Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.573 [INFO][5471] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.573 [INFO][5471] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.615 [INFO][5479] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" HandleID="k8s-pod-network.7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.615 [INFO][5479] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.615 [INFO][5479] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.622 [WARNING][5479] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" HandleID="k8s-pod-network.7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.623 [INFO][5479] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" HandleID="k8s-pod-network.7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Workload="localhost-k8s-coredns--674b8bbfcf--8w5cr-eth0" Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.625 [INFO][5479] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:07.632469 containerd[1459]: 2026-03-06 01:37:07.628 [INFO][5471] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3" Mar 6 01:37:07.632469 containerd[1459]: time="2026-03-06T01:37:07.632423125Z" level=info msg="TearDown network for sandbox \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\" successfully" Mar 6 01:37:07.637771 containerd[1459]: time="2026-03-06T01:37:07.637681712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:37:07.637850 containerd[1459]: time="2026-03-06T01:37:07.637804362Z" level=info msg="RemovePodSandbox \"7b778730280a4e6a4013c014a3f3a55c5aea4fbfc9d921fc35ae37cf2b1628e3\" returns successfully" Mar 6 01:37:07.638590 containerd[1459]: time="2026-03-06T01:37:07.638540558Z" level=info msg="StopPodSandbox for \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\"" Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.701 [WARNING][5497] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--7867t-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"9f2da77b-8614-4197-bbb1-1398be46188f", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985", Pod:"goldmane-5b85766d88-7867t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia66cd50184c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.701 [INFO][5497] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.701 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" iface="eth0" netns="" Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.702 [INFO][5497] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.702 [INFO][5497] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.739 [INFO][5505] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" HandleID="k8s-pod-network.61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.739 [INFO][5505] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.739 [INFO][5505] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.764 [WARNING][5505] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" HandleID="k8s-pod-network.61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.764 [INFO][5505] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" HandleID="k8s-pod-network.61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.766 [INFO][5505] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:07.774095 containerd[1459]: 2026-03-06 01:37:07.770 [INFO][5497] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:37:07.774095 containerd[1459]: time="2026-03-06T01:37:07.774016458Z" level=info msg="TearDown network for sandbox \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\" successfully" Mar 6 01:37:07.774095 containerd[1459]: time="2026-03-06T01:37:07.774044240Z" level=info msg="StopPodSandbox for \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\" returns successfully" Mar 6 01:37:07.774806 containerd[1459]: time="2026-03-06T01:37:07.774691774Z" level=info msg="RemovePodSandbox for \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\"" Mar 6 01:37:07.774806 containerd[1459]: time="2026-03-06T01:37:07.774720267Z" level=info msg="Forcibly stopping sandbox \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\"" Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.836 [WARNING][5524] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--7867t-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"9f2da77b-8614-4197-bbb1-1398be46188f", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a10859b621dda9f1b43935f8d6068f9af8c1c4811c7a8010c7bcd5659c0b2985", Pod:"goldmane-5b85766d88-7867t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia66cd50184c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.836 [INFO][5524] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.836 [INFO][5524] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" iface="eth0" netns="" Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.836 [INFO][5524] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.836 [INFO][5524] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.881 [INFO][5532] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" HandleID="k8s-pod-network.61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.881 [INFO][5532] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.881 [INFO][5532] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.889 [WARNING][5532] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" HandleID="k8s-pod-network.61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.890 [INFO][5532] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" HandleID="k8s-pod-network.61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Workload="localhost-k8s-goldmane--5b85766d88--7867t-eth0" Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.892 [INFO][5532] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:07.899098 containerd[1459]: 2026-03-06 01:37:07.895 [INFO][5524] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b" Mar 6 01:37:07.899809 containerd[1459]: time="2026-03-06T01:37:07.899129335Z" level=info msg="TearDown network for sandbox \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\" successfully" Mar 6 01:37:07.907335 containerd[1459]: time="2026-03-06T01:37:07.907261937Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:37:07.907470 containerd[1459]: time="2026-03-06T01:37:07.907354079Z" level=info msg="RemovePodSandbox \"61ef3731639e921884907ff604386cbe2977a828fd679d22edef74560844ed8b\" returns successfully" Mar 6 01:37:07.908616 containerd[1459]: time="2026-03-06T01:37:07.908177089Z" level=info msg="StopPodSandbox for \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\"" Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:07.965 [WARNING][5549] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vdjph-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a67ece0-9c61-4759-9222-15c2c383bab1", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838", Pod:"coredns-674b8bbfcf-vdjph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali417624052ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:07.966 [INFO][5549] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:07.966 [INFO][5549] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" iface="eth0" netns="" Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:07.966 [INFO][5549] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:07.966 [INFO][5549] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:08.006 [INFO][5558] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" HandleID="k8s-pod-network.359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:08.007 [INFO][5558] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:08.007 [INFO][5558] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:08.013 [WARNING][5558] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" HandleID="k8s-pod-network.359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:08.013 [INFO][5558] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" HandleID="k8s-pod-network.359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:08.016 [INFO][5558] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:08.021674 containerd[1459]: 2026-03-06 01:37:08.018 [INFO][5549] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:37:08.022666 containerd[1459]: time="2026-03-06T01:37:08.021743803Z" level=info msg="TearDown network for sandbox \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\" successfully" Mar 6 01:37:08.022666 containerd[1459]: time="2026-03-06T01:37:08.021778197Z" level=info msg="StopPodSandbox for \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\" returns successfully" Mar 6 01:37:08.022666 containerd[1459]: time="2026-03-06T01:37:08.022557717Z" level=info msg="RemovePodSandbox for \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\"" Mar 6 01:37:08.022666 containerd[1459]: time="2026-03-06T01:37:08.022601218Z" level=info msg="Forcibly stopping sandbox \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\"" Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.085 [WARNING][5576] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vdjph-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a67ece0-9c61-4759-9222-15c2c383bab1", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c96c60c2fa427cf60a7b7b561d0474430c76a019f252efd4a071fbc2519c838", Pod:"coredns-674b8bbfcf-vdjph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali417624052ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.085 [INFO][5576] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.085 [INFO][5576] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" iface="eth0" netns="" Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.085 [INFO][5576] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.085 [INFO][5576] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.115 [INFO][5584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" HandleID="k8s-pod-network.359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.115 [INFO][5584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.115 [INFO][5584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.125 [WARNING][5584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" HandleID="k8s-pod-network.359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.125 [INFO][5584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" HandleID="k8s-pod-network.359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Workload="localhost-k8s-coredns--674b8bbfcf--vdjph-eth0" Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.128 [INFO][5584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:08.135645 containerd[1459]: 2026-03-06 01:37:08.132 [INFO][5576] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6" Mar 6 01:37:08.136356 containerd[1459]: time="2026-03-06T01:37:08.135650618Z" level=info msg="TearDown network for sandbox \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\" successfully" Mar 6 01:37:08.141681 containerd[1459]: time="2026-03-06T01:37:08.141566197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:37:08.141771 containerd[1459]: time="2026-03-06T01:37:08.141695148Z" level=info msg="RemovePodSandbox \"359ff3a5f6de42e9f4238ee55feeefe2f836f1d92e0fe0163218c8fccdc4b0d6\" returns successfully" Mar 6 01:37:08.146753 containerd[1459]: time="2026-03-06T01:37:08.146299947Z" level=info msg="StopPodSandbox for \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\"" Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.200 [WARNING][5602] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0", GenerateName:"calico-apiserver-6cfdc76bfd-", Namespace:"calico-system", SelfLink:"", UID:"1287fe69-fec1-4536-8130-77a9ee5d6e26", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cfdc76bfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f", Pod:"calico-apiserver-6cfdc76bfd-kz2gs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic34a6eb9234", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.201 [INFO][5602] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.201 [INFO][5602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" iface="eth0" netns="" Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.201 [INFO][5602] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.201 [INFO][5602] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.236 [INFO][5610] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" HandleID="k8s-pod-network.f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.236 [INFO][5610] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.236 [INFO][5610] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.247 [WARNING][5610] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" HandleID="k8s-pod-network.f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.247 [INFO][5610] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" HandleID="k8s-pod-network.f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.251 [INFO][5610] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:08.263617 containerd[1459]: 2026-03-06 01:37:08.260 [INFO][5602] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:37:08.264146 containerd[1459]: time="2026-03-06T01:37:08.263694250Z" level=info msg="TearDown network for sandbox \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\" successfully" Mar 6 01:37:08.264146 containerd[1459]: time="2026-03-06T01:37:08.263729886Z" level=info msg="StopPodSandbox for \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\" returns successfully" Mar 6 01:37:08.264586 containerd[1459]: time="2026-03-06T01:37:08.264532384Z" level=info msg="RemovePodSandbox for \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\"" Mar 6 01:37:08.264657 containerd[1459]: time="2026-03-06T01:37:08.264590672Z" level=info msg="Forcibly stopping sandbox \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\"" Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.312 [WARNING][5628] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0", GenerateName:"calico-apiserver-6cfdc76bfd-", Namespace:"calico-system", SelfLink:"", UID:"1287fe69-fec1-4536-8130-77a9ee5d6e26", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cfdc76bfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f7f8919af3fb117a3ee7525c8161387c15e6d2874dcd310c97f6073afff886f", Pod:"calico-apiserver-6cfdc76bfd-kz2gs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic34a6eb9234", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.313 [INFO][5628] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.313 [INFO][5628] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" iface="eth0" netns="" Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.313 [INFO][5628] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.313 [INFO][5628] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.352 [INFO][5637] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" HandleID="k8s-pod-network.f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.353 [INFO][5637] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.353 [INFO][5637] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.362 [WARNING][5637] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" HandleID="k8s-pod-network.f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.362 [INFO][5637] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" HandleID="k8s-pod-network.f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--kz2gs-eth0" Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.364 [INFO][5637] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:08.370365 containerd[1459]: 2026-03-06 01:37:08.367 [INFO][5628] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a" Mar 6 01:37:08.370842 containerd[1459]: time="2026-03-06T01:37:08.370472452Z" level=info msg="TearDown network for sandbox \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\" successfully" Mar 6 01:37:08.376263 containerd[1459]: time="2026-03-06T01:37:08.376203847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:37:08.376476 containerd[1459]: time="2026-03-06T01:37:08.376307811Z" level=info msg="RemovePodSandbox \"f4977f8363a44687d97e0b20765c040cc42256700c86d8cc8cfc9041088e848a\" returns successfully" Mar 6 01:37:08.377070 containerd[1459]: time="2026-03-06T01:37:08.377026985Z" level=info msg="StopPodSandbox for \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\"" Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.629 [WARNING][5654] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0", GenerateName:"calico-apiserver-6cfdc76bfd-", Namespace:"calico-system", SelfLink:"", UID:"cd4720ad-a0f0-4e45-919b-1e9f495dfc32", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cfdc76bfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c", Pod:"calico-apiserver-6cfdc76bfd-tpll8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif7a7c6c0897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.629 [INFO][5654] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.629 [INFO][5654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" iface="eth0" netns="" Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.629 [INFO][5654] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.629 [INFO][5654] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.675 [INFO][5663] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" HandleID="k8s-pod-network.f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.676 [INFO][5663] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.676 [INFO][5663] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.683 [WARNING][5663] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" HandleID="k8s-pod-network.f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.684 [INFO][5663] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" HandleID="k8s-pod-network.f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.686 [INFO][5663] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:08.692221 containerd[1459]: 2026-03-06 01:37:08.689 [INFO][5654] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:37:08.693025 containerd[1459]: time="2026-03-06T01:37:08.692260489Z" level=info msg="TearDown network for sandbox \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\" successfully" Mar 6 01:37:08.693025 containerd[1459]: time="2026-03-06T01:37:08.692288451Z" level=info msg="StopPodSandbox for \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\" returns successfully" Mar 6 01:37:08.693114 containerd[1459]: time="2026-03-06T01:37:08.693053422Z" level=info msg="RemovePodSandbox for \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\"" Mar 6 01:37:08.693114 containerd[1459]: time="2026-03-06T01:37:08.693090251Z" level=info msg="Forcibly stopping sandbox \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\"" Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.755 [WARNING][5681] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0", GenerateName:"calico-apiserver-6cfdc76bfd-", Namespace:"calico-system", SelfLink:"", UID:"cd4720ad-a0f0-4e45-919b-1e9f495dfc32", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cfdc76bfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"944f79b100d18e17fcaed6a23ec72873665544c32f87bf2ba7113611e55d8f2c", Pod:"calico-apiserver-6cfdc76bfd-tpll8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif7a7c6c0897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.756 [INFO][5681] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.756 [INFO][5681] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" iface="eth0" netns="" Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.756 [INFO][5681] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.756 [INFO][5681] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.787 [INFO][5689] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" HandleID="k8s-pod-network.f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.787 [INFO][5689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.787 [INFO][5689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.795 [WARNING][5689] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" HandleID="k8s-pod-network.f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.795 [INFO][5689] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" HandleID="k8s-pod-network.f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Workload="localhost-k8s-calico--apiserver--6cfdc76bfd--tpll8-eth0" Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.798 [INFO][5689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:08.805737 containerd[1459]: 2026-03-06 01:37:08.801 [INFO][5681] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326" Mar 6 01:37:08.805737 containerd[1459]: time="2026-03-06T01:37:08.805712383Z" level=info msg="TearDown network for sandbox \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\" successfully" Mar 6 01:37:08.811393 containerd[1459]: time="2026-03-06T01:37:08.811055000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:37:08.811393 containerd[1459]: time="2026-03-06T01:37:08.811170244Z" level=info msg="RemovePodSandbox \"f4ddf02f91ed02b9886fe83c045d669d19efac3997df2230a475393ee8aad326\" returns successfully" Mar 6 01:37:08.812041 containerd[1459]: time="2026-03-06T01:37:08.811984155Z" level=info msg="StopPodSandbox for \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\"" Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.871 [WARNING][5707] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kfs88-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"885266c2-0ca6-482d-827c-cc1c88e284cf", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8", Pod:"csi-node-driver-kfs88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac1c911f2c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.871 [INFO][5707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.871 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" iface="eth0" netns="" Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.871 [INFO][5707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.871 [INFO][5707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.912 [INFO][5715] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" HandleID="k8s-pod-network.2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.912 [INFO][5715] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.912 [INFO][5715] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.920 [WARNING][5715] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" HandleID="k8s-pod-network.2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.920 [INFO][5715] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" HandleID="k8s-pod-network.2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.923 [INFO][5715] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:08.929825 containerd[1459]: 2026-03-06 01:37:08.926 [INFO][5707] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:37:08.929825 containerd[1459]: time="2026-03-06T01:37:08.929812005Z" level=info msg="TearDown network for sandbox \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\" successfully" Mar 6 01:37:08.929825 containerd[1459]: time="2026-03-06T01:37:08.929839847Z" level=info msg="StopPodSandbox for \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\" returns successfully" Mar 6 01:37:08.931362 containerd[1459]: time="2026-03-06T01:37:08.930849767Z" level=info msg="RemovePodSandbox for \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\"" Mar 6 01:37:08.931362 containerd[1459]: time="2026-03-06T01:37:08.930946607Z" level=info msg="Forcibly stopping sandbox \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\"" Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:08.997 [WARNING][5732] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kfs88-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"885266c2-0ca6-482d-827c-cc1c88e284cf", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7acf0a7a3264dbfc836b446d2324c24e56afb9ef331ad621f7320774565d01e8", Pod:"csi-node-driver-kfs88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac1c911f2c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:08.997 [INFO][5732] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:08.997 [INFO][5732] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" iface="eth0" netns="" Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:08.997 [INFO][5732] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:08.997 [INFO][5732] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:09.029 [INFO][5740] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" HandleID="k8s-pod-network.2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:09.029 [INFO][5740] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:09.029 [INFO][5740] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:09.038 [WARNING][5740] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" HandleID="k8s-pod-network.2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:09.038 [INFO][5740] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" HandleID="k8s-pod-network.2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Workload="localhost-k8s-csi--node--driver--kfs88-eth0" Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:09.042 [INFO][5740] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:09.052254 containerd[1459]: 2026-03-06 01:37:09.046 [INFO][5732] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047" Mar 6 01:37:09.052254 containerd[1459]: time="2026-03-06T01:37:09.052164702Z" level=info msg="TearDown network for sandbox \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\" successfully" Mar 6 01:37:09.058096 containerd[1459]: time="2026-03-06T01:37:09.058007714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:37:09.058208 containerd[1459]: time="2026-03-06T01:37:09.058142175Z" level=info msg="RemovePodSandbox \"2fa7a5134a9cdda6e39f4e8123350794c9f7072408a4788bf0e8f08b6cb9c047\" returns successfully" Mar 6 01:37:09.058954 containerd[1459]: time="2026-03-06T01:37:09.058844593Z" level=info msg="StopPodSandbox for \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\"" Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.120 [WARNING][5759] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0", GenerateName:"calico-kube-controllers-85b54d65f4-", Namespace:"calico-system", SelfLink:"", UID:"69b227be-3957-4eec-9624-244977470ca6", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b54d65f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b", Pod:"calico-kube-controllers-85b54d65f4-pdsbh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a032a72858", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.121 [INFO][5759] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.121 [INFO][5759] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" iface="eth0" netns="" Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.121 [INFO][5759] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.121 [INFO][5759] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.164 [INFO][5767] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" HandleID="k8s-pod-network.176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.165 [INFO][5767] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.165 [INFO][5767] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.176 [WARNING][5767] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" HandleID="k8s-pod-network.176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.177 [INFO][5767] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" HandleID="k8s-pod-network.176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.179 [INFO][5767] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:09.187090 containerd[1459]: 2026-03-06 01:37:09.183 [INFO][5759] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:37:09.188398 containerd[1459]: time="2026-03-06T01:37:09.188066195Z" level=info msg="TearDown network for sandbox \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\" successfully" Mar 6 01:37:09.188398 containerd[1459]: time="2026-03-06T01:37:09.188141366Z" level=info msg="StopPodSandbox for \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\" returns successfully" Mar 6 01:37:09.189696 containerd[1459]: time="2026-03-06T01:37:09.189188531Z" level=info msg="RemovePodSandbox for \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\"" Mar 6 01:37:09.189696 containerd[1459]: time="2026-03-06T01:37:09.189227573Z" level=info msg="Forcibly stopping sandbox \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\"" Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.256 [WARNING][5785] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0", GenerateName:"calico-kube-controllers-85b54d65f4-", Namespace:"calico-system", SelfLink:"", UID:"69b227be-3957-4eec-9624-244977470ca6", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b54d65f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72714953d14d398451d80bc4d52732d53887e907f90fe629a9e98f538c527a2b", Pod:"calico-kube-controllers-85b54d65f4-pdsbh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a032a72858", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.257 [INFO][5785] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.257 [INFO][5785] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" iface="eth0" netns="" Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.257 [INFO][5785] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.257 [INFO][5785] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.292 [INFO][5793] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" HandleID="k8s-pod-network.176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.293 [INFO][5793] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.293 [INFO][5793] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.303 [WARNING][5793] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" HandleID="k8s-pod-network.176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.303 [INFO][5793] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" HandleID="k8s-pod-network.176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Workload="localhost-k8s-calico--kube--controllers--85b54d65f4--pdsbh-eth0" Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.305 [INFO][5793] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:37:09.312202 containerd[1459]: 2026-03-06 01:37:09.309 [INFO][5785] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11" Mar 6 01:37:09.312202 containerd[1459]: time="2026-03-06T01:37:09.312158927Z" level=info msg="TearDown network for sandbox \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\" successfully" Mar 6 01:37:09.317348 containerd[1459]: time="2026-03-06T01:37:09.317253428Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:37:09.317348 containerd[1459]: time="2026-03-06T01:37:09.317325794Z" level=info msg="RemovePodSandbox \"176f1350bf8e3bb7ff2eae0e8fe13c97aab8061fc0da31627147312c0dd2ac11\" returns successfully" Mar 6 01:37:10.236010 kubelet[2549]: I0306 01:37:10.235879 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:37:17.002135 kubelet[2549]: E0306 01:37:17.002061 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:37:18.992183 kubelet[2549]: E0306 01:37:18.992120 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:37:19.713289 systemd[1]: Started sshd@9-10.0.0.76:22-10.0.0.1:49450.service - OpenSSH per-connection server daemon (10.0.0.1:49450). Mar 6 01:37:19.825603 sshd[5856]: Accepted publickey for core from 10.0.0.1 port 49450 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:37:19.828998 sshd[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:37:19.838528 systemd-logind[1436]: New session 10 of user core. Mar 6 01:37:19.853295 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 6 01:37:20.506587 sshd[5856]: pam_unix(sshd:session): session closed for user core Mar 6 01:37:20.512837 systemd[1]: sshd@9-10.0.0.76:22-10.0.0.1:49450.service: Deactivated successfully. Mar 6 01:37:20.516056 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 01:37:20.517950 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit. Mar 6 01:37:20.522144 systemd-logind[1436]: Removed session 10. 
Mar 6 01:37:20.998499 kubelet[2549]: E0306 01:37:20.998327 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:37:25.567393 systemd[1]: Started sshd@10-10.0.0.76:22-10.0.0.1:46908.service - OpenSSH per-connection server daemon (10.0.0.1:46908). Mar 6 01:37:25.663116 sshd[5895]: Accepted publickey for core from 10.0.0.1 port 46908 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:37:25.668021 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:37:25.686841 systemd-logind[1436]: New session 11 of user core. Mar 6 01:37:25.697137 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 6 01:37:26.111421 sshd[5895]: pam_unix(sshd:session): session closed for user core Mar 6 01:37:26.124262 systemd[1]: sshd@10-10.0.0.76:22-10.0.0.1:46908.service: Deactivated successfully. Mar 6 01:37:26.130619 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 01:37:26.132212 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit. Mar 6 01:37:26.135538 systemd-logind[1436]: Removed session 11. Mar 6 01:37:30.992390 kubelet[2549]: E0306 01:37:30.992295 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:37:31.132151 systemd[1]: Started sshd@11-10.0.0.76:22-10.0.0.1:46914.service - OpenSSH per-connection server daemon (10.0.0.1:46914). Mar 6 01:37:31.454699 sshd[5935]: Accepted publickey for core from 10.0.0.1 port 46914 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:37:31.457832 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:37:31.486377 systemd-logind[1436]: New session 12 of user core. Mar 6 01:37:31.498132 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 6 01:37:31.575511 systemd[1]: run-containerd-runc-k8s.io-e72da28bb258cc9c9d68b265e5a15b891164dd8f215a90e8232f923cae1a01d8-runc.TbhHtp.mount: Deactivated successfully. Mar 6 01:37:32.033153 sshd[5935]: pam_unix(sshd:session): session closed for user core Mar 6 01:37:32.045289 systemd[1]: sshd@11-10.0.0.76:22-10.0.0.1:46914.service: Deactivated successfully. Mar 6 01:37:32.054181 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 01:37:32.060742 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit. Mar 6 01:37:32.063203 systemd-logind[1436]: Removed session 12. Mar 6 01:37:32.588719 systemd[1]: run-containerd-runc-k8s.io-e72da28bb258cc9c9d68b265e5a15b891164dd8f215a90e8232f923cae1a01d8-runc.Ja4jOx.mount: Deactivated successfully. Mar 6 01:37:37.077093 systemd[1]: Started sshd@12-10.0.0.76:22-10.0.0.1:39228.service - OpenSSH per-connection server daemon (10.0.0.1:39228). Mar 6 01:37:37.288227 sshd[6011]: Accepted publickey for core from 10.0.0.1 port 39228 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:37:37.291110 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:37:37.313181 systemd-logind[1436]: New session 13 of user core. Mar 6 01:37:37.319648 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 6 01:37:37.593218 sshd[6011]: pam_unix(sshd:session): session closed for user core Mar 6 01:37:37.617361 systemd[1]: sshd@12-10.0.0.76:22-10.0.0.1:39228.service: Deactivated successfully. 
Mar 6 01:37:37.634756 systemd[1]: session-13.scope: Deactivated successfully. Mar 6 01:37:37.638124 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit. Mar 6 01:37:37.643667 systemd-logind[1436]: Removed session 13. Mar 6 01:37:42.600129 systemd[1]: Started sshd@13-10.0.0.76:22-10.0.0.1:44732.service - OpenSSH per-connection server daemon (10.0.0.1:44732). Mar 6 01:37:42.654275 sshd[6029]: Accepted publickey for core from 10.0.0.1 port 44732 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:37:42.656255 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:37:42.662575 systemd-logind[1436]: New session 14 of user core. Mar 6 01:37:42.670194 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 6 01:37:42.815087 sshd[6029]: pam_unix(sshd:session): session closed for user core Mar 6 01:37:42.820790 systemd[1]: sshd@13-10.0.0.76:22-10.0.0.1:44732.service: Deactivated successfully. Mar 6 01:37:42.823649 systemd[1]: session-14.scope: Deactivated successfully. Mar 6 01:37:42.825749 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit. Mar 6 01:37:42.827546 systemd-logind[1436]: Removed session 14. Mar 6 01:37:47.838414 systemd[1]: Started sshd@14-10.0.0.76:22-10.0.0.1:44736.service - OpenSSH per-connection server daemon (10.0.0.1:44736). Mar 6 01:37:47.910862 sshd[6044]: Accepted publickey for core from 10.0.0.1 port 44736 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:37:47.913575 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:37:47.919950 systemd-logind[1436]: New session 15 of user core. Mar 6 01:37:47.929164 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 6 01:37:48.110570 sshd[6044]: pam_unix(sshd:session): session closed for user core Mar 6 01:37:48.117126 systemd[1]: sshd@14-10.0.0.76:22-10.0.0.1:44736.service: Deactivated successfully. Mar 6 01:37:48.120829 systemd[1]: session-15.scope: Deactivated successfully. Mar 6 01:37:48.124400 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit. Mar 6 01:37:48.126719 systemd-logind[1436]: Removed session 15. Mar 6 01:37:53.123697 systemd[1]: Started sshd@15-10.0.0.76:22-10.0.0.1:58842.service - OpenSSH per-connection server daemon (10.0.0.1:58842). Mar 6 01:37:53.201768 sshd[6081]: Accepted publickey for core from 10.0.0.1 port 58842 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:37:53.204799 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:37:53.212378 systemd-logind[1436]: New session 16 of user core. Mar 6 01:37:53.225222 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 6 01:37:53.387658 sshd[6081]: pam_unix(sshd:session): session closed for user core Mar 6 01:37:53.393037 systemd[1]: sshd@15-10.0.0.76:22-10.0.0.1:58842.service: Deactivated successfully. Mar 6 01:37:53.395605 systemd[1]: session-16.scope: Deactivated successfully. Mar 6 01:37:53.396654 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit. Mar 6 01:37:53.398573 systemd-logind[1436]: Removed session 16. 
Mar 6 01:37:55.991761 kubelet[2549]: E0306 01:37:55.991644 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:37:58.405730 systemd[1]: Started sshd@16-10.0.0.76:22-10.0.0.1:58850.service - OpenSSH per-connection server daemon (10.0.0.1:58850).
Mar 6 01:37:58.449856 sshd[6117]: Accepted publickey for core from 10.0.0.1 port 58850 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:37:58.452183 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:37:58.458558 systemd-logind[1436]: New session 17 of user core.
Mar 6 01:37:58.469267 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 6 01:37:58.614769 sshd[6117]: pam_unix(sshd:session): session closed for user core
Mar 6 01:37:58.621201 systemd[1]: sshd@16-10.0.0.76:22-10.0.0.1:58850.service: Deactivated successfully.
Mar 6 01:37:58.623752 systemd[1]: session-17.scope: Deactivated successfully.
Mar 6 01:37:58.624807 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit.
Mar 6 01:37:58.626869 systemd-logind[1436]: Removed session 17.
Mar 6 01:38:03.632722 systemd[1]: Started sshd@17-10.0.0.76:22-10.0.0.1:48934.service - OpenSSH per-connection server daemon (10.0.0.1:48934).
Mar 6 01:38:03.679055 sshd[6170]: Accepted publickey for core from 10.0.0.1 port 48934 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:03.681937 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:03.688816 systemd-logind[1436]: New session 18 of user core.
Mar 6 01:38:03.704218 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 6 01:38:03.852698 sshd[6170]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:03.862662 systemd[1]: sshd@17-10.0.0.76:22-10.0.0.1:48934.service: Deactivated successfully.
Mar 6 01:38:03.864815 systemd[1]: session-18.scope: Deactivated successfully.
Mar 6 01:38:03.866668 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit.
Mar 6 01:38:03.873371 systemd[1]: Started sshd@18-10.0.0.76:22-10.0.0.1:48944.service - OpenSSH per-connection server daemon (10.0.0.1:48944).
Mar 6 01:38:03.876287 systemd-logind[1436]: Removed session 18.
Mar 6 01:38:03.931988 sshd[6185]: Accepted publickey for core from 10.0.0.1 port 48944 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:03.933985 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:03.941362 systemd-logind[1436]: New session 19 of user core.
Mar 6 01:38:03.955259 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 6 01:38:04.169734 sshd[6185]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:04.185187 systemd[1]: sshd@18-10.0.0.76:22-10.0.0.1:48944.service: Deactivated successfully.
Mar 6 01:38:04.188501 systemd[1]: session-19.scope: Deactivated successfully.
Mar 6 01:38:04.195342 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit.
Mar 6 01:38:04.204496 systemd[1]: Started sshd@19-10.0.0.76:22-10.0.0.1:48960.service - OpenSSH per-connection server daemon (10.0.0.1:48960).
Mar 6 01:38:04.209457 systemd-logind[1436]: Removed session 19.
Mar 6 01:38:04.245186 sshd[6197]: Accepted publickey for core from 10.0.0.1 port 48960 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:04.247674 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:04.253707 systemd-logind[1436]: New session 20 of user core.
Mar 6 01:38:04.259156 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 6 01:38:04.390303 sshd[6197]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:04.395196 systemd[1]: sshd@19-10.0.0.76:22-10.0.0.1:48960.service: Deactivated successfully.
Mar 6 01:38:04.398145 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 01:38:04.399070 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit.
Mar 6 01:38:04.400434 systemd-logind[1436]: Removed session 20.
Mar 6 01:38:06.992311 kubelet[2549]: E0306 01:38:06.992089 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:38:09.403717 systemd[1]: Started sshd@20-10.0.0.76:22-10.0.0.1:48970.service - OpenSSH per-connection server daemon (10.0.0.1:48970).
Mar 6 01:38:09.449765 sshd[6213]: Accepted publickey for core from 10.0.0.1 port 48970 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:09.452251 sshd[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:09.458876 systemd-logind[1436]: New session 21 of user core.
Mar 6 01:38:09.464164 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 6 01:38:09.624533 sshd[6213]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:09.631130 systemd[1]: sshd@20-10.0.0.76:22-10.0.0.1:48970.service: Deactivated successfully.
Mar 6 01:38:09.634067 systemd[1]: session-21.scope: Deactivated successfully.
Mar 6 01:38:09.635602 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit.
Mar 6 01:38:09.637956 systemd-logind[1436]: Removed session 21.
Mar 6 01:38:12.992216 kubelet[2549]: E0306 01:38:12.992074 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:38:14.636748 systemd[1]: Started sshd@21-10.0.0.76:22-10.0.0.1:58518.service - OpenSSH per-connection server daemon (10.0.0.1:58518).
Mar 6 01:38:14.712016 sshd[6266]: Accepted publickey for core from 10.0.0.1 port 58518 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:14.715867 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:14.723094 systemd-logind[1436]: New session 22 of user core.
Mar 6 01:38:14.733226 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 6 01:38:14.901983 sshd[6266]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:14.912485 systemd[1]: sshd@21-10.0.0.76:22-10.0.0.1:58518.service: Deactivated successfully.
Mar 6 01:38:14.915303 systemd[1]: session-22.scope: Deactivated successfully.
Mar 6 01:38:14.918058 systemd-logind[1436]: Session 22 logged out. Waiting for processes to exit.
Mar 6 01:38:14.925485 systemd[1]: Started sshd@22-10.0.0.76:22-10.0.0.1:58528.service - OpenSSH per-connection server daemon (10.0.0.1:58528).
Mar 6 01:38:14.927337 systemd-logind[1436]: Removed session 22.
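[Annotation] Each connection gets its own transient unit whose instance name encodes a connection counter plus the local and peer endpoints, e.g. sshd@21-10.0.0.76:22-10.0.0.1:58518.service above. A tiny Python sketch that splits such a name into its parts; the pattern is inferred from the names visible in this log and assumes IPv4 addresses only.

    import re

    # Per-connection unit names in this log look like
    # sshd@<counter>-<local addr>:<local port>-<peer addr>:<peer port>.service
    UNIT = re.compile(r"^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$")

    def parse_sshd_unit(name):
        m = UNIT.match(name)
        if not m:
            raise ValueError(f"not a per-connection sshd unit: {name}")
        counter, laddr, lport, raddr, rport = m.groups()
        return {"counter": int(counter),
                "local": (laddr, int(lport)),
                "peer": (raddr, int(rport))}

For the example above this yields counter 21, local endpoint 10.0.0.76:22, and peer 10.0.0.1:58518, matching the "(10.0.0.1:58518)" suffix systemd prints in the Started line.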
Mar 6 01:38:14.988530 sshd[6281]: Accepted publickey for core from 10.0.0.1 port 58528 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:14.991159 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:14.997786 systemd-logind[1436]: New session 23 of user core.
Mar 6 01:38:15.005199 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 6 01:38:15.368278 sshd[6281]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:15.377620 systemd[1]: sshd@22-10.0.0.76:22-10.0.0.1:58528.service: Deactivated successfully.
Mar 6 01:38:15.379613 systemd[1]: session-23.scope: Deactivated successfully.
Mar 6 01:38:15.381575 systemd-logind[1436]: Session 23 logged out. Waiting for processes to exit.
Mar 6 01:38:15.388301 systemd[1]: Started sshd@23-10.0.0.76:22-10.0.0.1:58534.service - OpenSSH per-connection server daemon (10.0.0.1:58534).
Mar 6 01:38:15.390172 systemd-logind[1436]: Removed session 23.
Mar 6 01:38:15.428929 sshd[6294]: Accepted publickey for core from 10.0.0.1 port 58534 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:15.431332 sshd[6294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:15.437209 systemd-logind[1436]: New session 24 of user core.
Mar 6 01:38:15.445172 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 6 01:38:16.083050 sshd[6294]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:16.095486 systemd[1]: sshd@23-10.0.0.76:22-10.0.0.1:58534.service: Deactivated successfully.
Mar 6 01:38:16.098697 systemd[1]: session-24.scope: Deactivated successfully.
Mar 6 01:38:16.101119 systemd-logind[1436]: Session 24 logged out. Waiting for processes to exit.
Mar 6 01:38:16.116460 systemd[1]: Started sshd@24-10.0.0.76:22-10.0.0.1:58540.service - OpenSSH per-connection server daemon (10.0.0.1:58540).
Mar 6 01:38:16.119662 systemd-logind[1436]: Removed session 24.
Mar 6 01:38:16.172943 sshd[6320]: Accepted publickey for core from 10.0.0.1 port 58540 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:16.175494 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:16.182573 systemd-logind[1436]: New session 25 of user core.
Mar 6 01:38:16.190173 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 6 01:38:16.559712 sshd[6320]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:16.574304 systemd[1]: sshd@24-10.0.0.76:22-10.0.0.1:58540.service: Deactivated successfully.
Mar 6 01:38:16.576629 systemd[1]: session-25.scope: Deactivated successfully.
Mar 6 01:38:16.579619 systemd-logind[1436]: Session 25 logged out. Waiting for processes to exit.
Mar 6 01:38:16.590617 systemd[1]: Started sshd@25-10.0.0.76:22-10.0.0.1:58546.service - OpenSSH per-connection server daemon (10.0.0.1:58546).
Mar 6 01:38:16.593642 systemd-logind[1436]: Removed session 25.
Mar 6 01:38:16.661260 sshd[6334]: Accepted publickey for core from 10.0.0.1 port 58546 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:16.664945 sshd[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:16.677686 systemd-logind[1436]: New session 26 of user core.
Mar 6 01:38:16.687192 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 6 01:38:16.838797 sshd[6334]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:16.844511 systemd[1]: sshd@25-10.0.0.76:22-10.0.0.1:58546.service: Deactivated successfully.
Mar 6 01:38:16.847476 systemd[1]: session-26.scope: Deactivated successfully.
Mar 6 01:38:16.848584 systemd-logind[1436]: Session 26 logged out. Waiting for processes to exit.
Mar 6 01:38:16.850268 systemd-logind[1436]: Removed session 26.
Mar 6 01:38:21.868440 systemd[1]: Started sshd@26-10.0.0.76:22-10.0.0.1:58560.service - OpenSSH per-connection server daemon (10.0.0.1:58560).
Mar 6 01:38:21.911566 sshd[6372]: Accepted publickey for core from 10.0.0.1 port 58560 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:21.914024 sshd[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:21.921289 systemd-logind[1436]: New session 27 of user core.
Mar 6 01:38:21.935108 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 6 01:38:22.096989 sshd[6372]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:22.101665 systemd[1]: sshd@26-10.0.0.76:22-10.0.0.1:58560.service: Deactivated successfully.
Mar 6 01:38:22.104314 systemd[1]: session-27.scope: Deactivated successfully.
Mar 6 01:38:22.107572 systemd-logind[1436]: Session 27 logged out. Waiting for processes to exit.
Mar 6 01:38:22.110424 systemd-logind[1436]: Removed session 27.
Mar 6 01:38:26.992201 kubelet[2549]: E0306 01:38:26.992122 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:38:27.109680 systemd[1]: Started sshd@27-10.0.0.76:22-10.0.0.1:59818.service - OpenSSH per-connection server daemon (10.0.0.1:59818).
Mar 6 01:38:27.152107 sshd[6405]: Accepted publickey for core from 10.0.0.1 port 59818 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:27.154426 sshd[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:27.161050 systemd-logind[1436]: New session 28 of user core.
Mar 6 01:38:27.167163 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 6 01:38:27.377347 sshd[6405]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:27.384499 systemd[1]: sshd@27-10.0.0.76:22-10.0.0.1:59818.service: Deactivated successfully.
Mar 6 01:38:27.387277 systemd[1]: session-28.scope: Deactivated successfully.
Mar 6 01:38:27.388584 systemd-logind[1436]: Session 28 logged out. Waiting for processes to exit.
Mar 6 01:38:27.390502 systemd-logind[1436]: Removed session 28.
Mar 6 01:38:28.992084 kubelet[2549]: E0306 01:38:28.991850 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:38:32.394170 systemd[1]: Started sshd@28-10.0.0.76:22-10.0.0.1:49088.service - OpenSSH per-connection server daemon (10.0.0.1:49088).
Mar 6 01:38:32.495834 sshd[6479]: Accepted publickey for core from 10.0.0.1 port 49088 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:38:32.499235 sshd[6479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:38:32.506422 systemd-logind[1436]: New session 29 of user core.
Mar 6 01:38:32.517204 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 6 01:38:32.686654 sshd[6479]: pam_unix(sshd:session): session closed for user core
Mar 6 01:38:32.692228 systemd[1]: sshd@28-10.0.0.76:22-10.0.0.1:49088.service: Deactivated successfully.
Mar 6 01:38:32.695210 systemd[1]: session-29.scope: Deactivated successfully.
Mar 6 01:38:32.697712 systemd-logind[1436]: Session 29 logged out. Waiting for processes to exit.
Mar 6 01:38:32.699824 systemd-logind[1436]: Removed session 29.