Mar 4 01:00:20.278955 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 22:42:33 -00 2026 Mar 4 01:00:20.280492 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 01:00:20.280515 kernel: BIOS-provided physical RAM map: Mar 4 01:00:20.280523 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 4 01:00:20.280531 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 4 01:00:20.280540 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 4 01:00:20.280549 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 4 01:00:20.280559 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 4 01:00:20.280569 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 4 01:00:20.280582 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 4 01:00:20.280591 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 4 01:00:20.280599 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 4 01:00:20.280721 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 4 01:00:20.280734 kernel: NX (Execute Disable) protection: active Mar 4 01:00:20.280746 kernel: APIC: Static calls initialized Mar 4 01:00:20.280853 kernel: SMBIOS 2.8 present. 
Mar 4 01:00:20.280867 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 4 01:00:20.280878 kernel: Hypervisor detected: KVM Mar 4 01:00:20.280889 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 4 01:00:20.280899 kernel: kvm-clock: using sched offset of 11160936396 cycles Mar 4 01:00:20.280910 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 4 01:00:20.280921 kernel: tsc: Detected 2445.426 MHz processor Mar 4 01:00:20.280932 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 4 01:00:20.280943 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 4 01:00:20.280958 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 4 01:00:20.280969 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 4 01:00:20.280988 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 4 01:00:20.280998 kernel: Using GB pages for direct mapping Mar 4 01:00:20.281009 kernel: ACPI: Early table checksum verification disabled Mar 4 01:00:20.281019 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 4 01:00:20.281030 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:00:20.281041 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:00:20.281052 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:00:20.281067 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 4 01:00:20.281077 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:00:20.281088 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:00:20.281099 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:00:20.281199 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Mar 4 01:00:20.281211 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Mar 4 01:00:20.281222 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Mar 4 01:00:20.281239 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 4 01:00:20.281255 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Mar 4 01:00:20.281266 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Mar 4 01:00:20.281277 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Mar 4 01:00:20.281564 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Mar 4 01:00:20.281581 kernel: No NUMA configuration found Mar 4 01:00:20.281593 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 4 01:00:20.281609 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 4 01:00:20.281621 kernel: Zone ranges: Mar 4 01:00:20.281632 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 4 01:00:20.281643 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 4 01:00:20.281654 kernel: Normal empty Mar 4 01:00:20.281665 kernel: Movable zone start for each node Mar 4 01:00:20.281676 kernel: Early memory node ranges Mar 4 01:00:20.281687 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 4 01:00:20.281698 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 4 01:00:20.281709 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Mar 4 01:00:20.281724 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 4 01:00:20.281834 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 4 01:00:20.281849 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 4 01:00:20.281860 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 4 01:00:20.281871 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 4 01:00:20.281882 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Mar 4 01:00:20.281893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 4 01:00:20.281904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 4 01:00:20.281915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 4 01:00:20.281930 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 4 01:00:20.281941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 4 01:00:20.281953 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 4 01:00:20.281964 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 4 01:00:20.281975 kernel: TSC deadline timer available Mar 4 01:00:20.281986 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 4 01:00:20.281997 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 4 01:00:20.282009 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 4 01:00:20.282244 kernel: kvm-guest: setup PV sched yield Mar 4 01:00:20.282265 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 4 01:00:20.282278 kernel: Booting paravirtualized kernel on KVM Mar 4 01:00:20.282290 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 4 01:00:20.282301 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 4 01:00:20.282312 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 4 01:00:20.282323 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 4 01:00:20.282334 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 4 01:00:20.282574 kernel: kvm-guest: PV spinlocks enabled Mar 4 01:00:20.282593 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 4 01:00:20.282611 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 
root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 01:00:20.282623 kernel: random: crng init done Mar 4 01:00:20.282635 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 4 01:00:20.282646 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 4 01:00:20.282657 kernel: Fallback order for Node 0: 0 Mar 4 01:00:20.282668 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Mar 4 01:00:20.282679 kernel: Policy zone: DMA32 Mar 4 01:00:20.282690 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 4 01:00:20.282706 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved) Mar 4 01:00:20.282717 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 4 01:00:20.282728 kernel: ftrace: allocating 37996 entries in 149 pages Mar 4 01:00:20.282739 kernel: ftrace: allocated 149 pages with 4 groups Mar 4 01:00:20.282750 kernel: Dynamic Preempt: voluntary Mar 4 01:00:20.282761 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 4 01:00:20.282774 kernel: rcu: RCU event tracing is enabled. Mar 4 01:00:20.282785 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 4 01:00:20.282797 kernel: Trampoline variant of Tasks RCU enabled. Mar 4 01:00:20.282812 kernel: Rude variant of Tasks RCU enabled. Mar 4 01:00:20.282823 kernel: Tracing variant of Tasks RCU enabled. Mar 4 01:00:20.282835 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 4 01:00:20.282846 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 4 01:00:20.283045 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 4 01:00:20.283061 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 4 01:00:20.283072 kernel: Console: colour VGA+ 80x25 Mar 4 01:00:20.283084 kernel: printk: console [ttyS0] enabled Mar 4 01:00:20.283095 kernel: ACPI: Core revision 20230628 Mar 4 01:00:20.283233 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 4 01:00:20.283245 kernel: APIC: Switch to symmetric I/O mode setup Mar 4 01:00:20.283256 kernel: x2apic enabled Mar 4 01:00:20.283267 kernel: APIC: Switched APIC routing to: physical x2apic Mar 4 01:00:20.283278 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 4 01:00:20.283289 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 4 01:00:20.283300 kernel: kvm-guest: setup PV IPIs Mar 4 01:00:20.283312 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 4 01:00:20.283340 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 4 01:00:20.283509 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 4 01:00:20.283521 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 4 01:00:20.283531 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 4 01:00:20.283547 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 4 01:00:20.283558 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 4 01:00:20.283567 kernel: Spectre V2 : Mitigation: Retpolines Mar 4 01:00:20.283580 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 4 01:00:20.283591 kernel: Speculative Store Bypass: Vulnerable Mar 4 01:00:20.283606 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 4 01:00:20.283706 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 4 01:00:20.283721 kernel: active return thunk: srso_alias_return_thunk Mar 4 01:00:20.283733 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 4 01:00:20.283743 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 4 01:00:20.283754 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 4 01:00:20.283764 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 4 01:00:20.283775 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 4 01:00:20.283791 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 4 01:00:20.283803 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 4 01:00:20.283814 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 4 01:00:20.283826 kernel: Freeing SMP alternatives memory: 32K Mar 4 01:00:20.283837 kernel: pid_max: default: 32768 minimum: 301 Mar 4 01:00:20.283847 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 4 01:00:20.283857 kernel: landlock: Up and running. Mar 4 01:00:20.283868 kernel: SELinux: Initializing. Mar 4 01:00:20.283878 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 4 01:00:20.283894 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 4 01:00:20.283905 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 4 01:00:20.283917 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 01:00:20.283928 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 01:00:20.283940 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 01:00:20.283951 kernel: Performance Events: PMU not available due to virtualization, using software events only. 
Mar 4 01:00:20.283963 kernel: signal: max sigframe size: 1776 Mar 4 01:00:20.284059 kernel: rcu: Hierarchical SRCU implementation. Mar 4 01:00:20.284074 kernel: rcu: Max phase no-delay instances is 400. Mar 4 01:00:20.284091 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 4 01:00:20.284102 kernel: smp: Bringing up secondary CPUs ... Mar 4 01:00:20.284224 kernel: smpboot: x86: Booting SMP configuration: Mar 4 01:00:20.284237 kernel: .... node #0, CPUs: #1 #2 #3 Mar 4 01:00:20.284249 kernel: smp: Brought up 1 node, 4 CPUs Mar 4 01:00:20.284260 kernel: smpboot: Max logical packages: 1 Mar 4 01:00:20.284272 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 4 01:00:20.284284 kernel: devtmpfs: initialized Mar 4 01:00:20.284296 kernel: x86/mm: Memory block size: 128MB Mar 4 01:00:20.284312 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 4 01:00:20.284324 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 4 01:00:20.284335 kernel: pinctrl core: initialized pinctrl subsystem Mar 4 01:00:20.284506 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 4 01:00:20.284521 kernel: audit: initializing netlink subsys (disabled) Mar 4 01:00:20.284533 kernel: audit: type=2000 audit(1772586011.937:1): state=initialized audit_enabled=0 res=1 Mar 4 01:00:20.284544 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 4 01:00:20.284556 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 4 01:00:20.284568 kernel: cpuidle: using governor menu Mar 4 01:00:20.284584 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 4 01:00:20.284595 kernel: dca service started, version 1.12.1 Mar 4 01:00:20.284608 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 4 01:00:20.284619 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 4 01:00:20.284631 
kernel: PCI: Using configuration type 1 for base access Mar 4 01:00:20.284643 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 4 01:00:20.284654 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 4 01:00:20.284666 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 4 01:00:20.284678 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 4 01:00:20.284694 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 4 01:00:20.284705 kernel: ACPI: Added _OSI(Module Device) Mar 4 01:00:20.284717 kernel: ACPI: Added _OSI(Processor Device) Mar 4 01:00:20.284729 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 4 01:00:20.284741 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 4 01:00:20.284752 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 4 01:00:20.284764 kernel: ACPI: Interpreter enabled Mar 4 01:00:20.284775 kernel: ACPI: PM: (supports S0 S3 S5) Mar 4 01:00:20.284920 kernel: ACPI: Using IOAPIC for interrupt routing Mar 4 01:00:20.284939 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 4 01:00:20.284951 kernel: PCI: Using E820 reservations for host bridge windows Mar 4 01:00:20.284963 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 4 01:00:20.284974 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 4 01:00:20.287492 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 4 01:00:20.287715 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 4 01:00:20.287911 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 4 01:00:20.287935 kernel: PCI host bridge to bus 0000:00 Mar 4 01:00:20.288867 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 4 01:00:20.289061 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Mar 4 01:00:20.289559 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 4 01:00:20.289748 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 4 01:00:20.289920 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 4 01:00:20.290080 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 4 01:00:20.291071 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 4 01:00:20.303936 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 4 01:00:20.304452 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 4 01:00:20.304670 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 4 01:00:20.304870 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 4 01:00:20.305067 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 4 01:00:20.305339 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 4 01:00:20.305753 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 4 01:00:20.305958 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 4 01:00:20.306339 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 4 01:00:20.306631 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 4 01:00:20.306907 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 4 01:00:20.307186 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 4 01:00:20.307492 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 4 01:00:20.307723 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 4 01:00:20.308077 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 4 01:00:20.308431 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 4 01:00:20.308694 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 4 01:00:20.308907 kernel: pci 
0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 4 01:00:20.310060 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 4 01:00:20.310525 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 4 01:00:20.310724 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 4 01:00:20.311180 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 4 01:00:20.311484 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 4 01:00:20.311695 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 4 01:00:20.312240 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 4 01:00:20.312555 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 4 01:00:20.312582 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 4 01:00:20.312595 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 4 01:00:20.312607 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 4 01:00:20.312619 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 4 01:00:20.312630 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 4 01:00:20.312642 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 4 01:00:20.312654 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 4 01:00:20.312665 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 4 01:00:20.312681 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 4 01:00:20.312692 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 4 01:00:20.312704 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 4 01:00:20.312716 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 4 01:00:20.312727 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 4 01:00:20.312739 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 4 01:00:20.312751 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 4 
01:00:20.312762 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 4 01:00:20.312774 kernel: iommu: Default domain type: Translated Mar 4 01:00:20.312786 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 4 01:00:20.312801 kernel: PCI: Using ACPI for IRQ routing Mar 4 01:00:20.312813 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 4 01:00:20.312824 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 4 01:00:20.312836 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 4 01:00:20.313074 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 4 01:00:20.313466 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 4 01:00:20.313897 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 4 01:00:20.313917 kernel: vgaarb: loaded Mar 4 01:00:20.313937 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 4 01:00:20.313949 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 4 01:00:20.313961 kernel: clocksource: Switched to clocksource kvm-clock Mar 4 01:00:20.313973 kernel: VFS: Disk quotas dquot_6.6.0 Mar 4 01:00:20.313985 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 4 01:00:20.313996 kernel: pnp: PnP ACPI init Mar 4 01:00:20.315770 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 4 01:00:20.315795 kernel: pnp: PnP ACPI: found 6 devices Mar 4 01:00:20.315815 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 4 01:00:20.315826 kernel: NET: Registered PF_INET protocol family Mar 4 01:00:20.315838 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 4 01:00:20.315850 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 4 01:00:20.315861 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 4 01:00:20.315871 kernel: TCP established hash table 
entries: 32768 (order: 6, 262144 bytes, linear) Mar 4 01:00:20.315882 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 4 01:00:20.315893 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 4 01:00:20.315904 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 4 01:00:20.315919 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 4 01:00:20.315930 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 4 01:00:20.315941 kernel: NET: Registered PF_XDP protocol family Mar 4 01:00:20.316230 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 4 01:00:20.316529 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 4 01:00:20.316747 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 4 01:00:20.316951 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 4 01:00:20.317181 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 4 01:00:20.317451 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 4 01:00:20.317469 kernel: PCI: CLS 0 bytes, default 64 Mar 4 01:00:20.317482 kernel: Initialise system trusted keyrings Mar 4 01:00:20.317494 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 4 01:00:20.317505 kernel: Key type asymmetric registered Mar 4 01:00:20.317516 kernel: Asymmetric key parser 'x509' registered Mar 4 01:00:20.317528 kernel: hrtimer: interrupt took 3099866 ns Mar 4 01:00:20.317539 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 4 01:00:20.317551 kernel: io scheduler mq-deadline registered Mar 4 01:00:20.317568 kernel: io scheduler kyber registered Mar 4 01:00:20.317579 kernel: io scheduler bfq registered Mar 4 01:00:20.317590 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 4 01:00:20.317603 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 4 01:00:20.317615 kernel: ACPI: \_SB_.GSIH: 
Enabled at IRQ 23 Mar 4 01:00:20.317626 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 4 01:00:20.317638 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 4 01:00:20.317649 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 4 01:00:20.317661 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 4 01:00:20.317676 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 4 01:00:20.317687 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 4 01:00:20.318050 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 4 01:00:20.318068 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 4 01:00:20.318305 kernel: rtc_cmos 00:04: registered as rtc0 Mar 4 01:00:20.318761 kernel: rtc_cmos 00:04: setting system clock to 2026-03-04T01:00:18 UTC (1772586018) Mar 4 01:00:20.318983 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 4 01:00:20.318999 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 4 01:00:20.319017 kernel: NET: Registered PF_INET6 protocol family Mar 4 01:00:20.319028 kernel: Segment Routing with IPv6 Mar 4 01:00:20.319038 kernel: In-situ OAM (IOAM) with IPv6 Mar 4 01:00:20.319049 kernel: NET: Registered PF_PACKET protocol family Mar 4 01:00:20.319060 kernel: Key type dns_resolver registered Mar 4 01:00:20.319070 kernel: IPI shorthand broadcast: enabled Mar 4 01:00:20.319081 kernel: sched_clock: Marking stable (5787064478, 807636934)->(7434314951, -839613539) Mar 4 01:00:20.319092 kernel: registered taskstats version 1 Mar 4 01:00:20.319102 kernel: Loading compiled-in X.509 certificates Mar 4 01:00:20.319181 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: be1dcbe3e3dee66976c19d61f4b179b405e1c498' Mar 4 01:00:20.319193 kernel: Key type .fscrypt registered Mar 4 01:00:20.319203 kernel: Key type fscrypt-provisioning registered Mar 4 01:00:20.319214 kernel: ima: 
No TPM chip found, activating TPM-bypass! Mar 4 01:00:20.319225 kernel: ima: Allocated hash algorithm: sha1 Mar 4 01:00:20.319236 kernel: ima: No architecture policies found Mar 4 01:00:20.319246 kernel: clk: Disabling unused clocks Mar 4 01:00:20.319257 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 4 01:00:20.319268 kernel: Write protecting the kernel read-only data: 36864k Mar 4 01:00:20.319282 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 4 01:00:20.319293 kernel: Run /init as init process Mar 4 01:00:20.319303 kernel: with arguments: Mar 4 01:00:20.319314 kernel: /init Mar 4 01:00:20.319326 kernel: with environment: Mar 4 01:00:20.319336 kernel: HOME=/ Mar 4 01:00:20.319420 kernel: TERM=linux Mar 4 01:00:20.319467 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 01:00:20.319488 systemd[1]: Detected virtualization kvm. Mar 4 01:00:20.319500 systemd[1]: Detected architecture x86-64. Mar 4 01:00:20.319511 systemd[1]: Running in initrd. Mar 4 01:00:20.319521 systemd[1]: No hostname configured, using default hostname. Mar 4 01:00:20.319532 systemd[1]: Hostname set to . Mar 4 01:00:20.319544 systemd[1]: Initializing machine ID from VM UUID. Mar 4 01:00:20.319555 systemd[1]: Queued start job for default target initrd.target. Mar 4 01:00:20.319566 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:00:20.319581 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 01:00:20.319594 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 4 01:00:20.319606 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 4 01:00:20.319617 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 4 01:00:20.319629 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 4 01:00:20.319644 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 4 01:00:20.319655 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 4 01:00:20.319671 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 01:00:20.319683 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 4 01:00:20.319742 systemd[1]: Reached target paths.target - Path Units. Mar 4 01:00:20.319755 systemd[1]: Reached target slices.target - Slice Units. Mar 4 01:00:20.319785 systemd[1]: Reached target swap.target - Swaps. Mar 4 01:00:20.319800 systemd[1]: Reached target timers.target - Timer Units. Mar 4 01:00:20.319816 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 4 01:00:20.319827 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 4 01:00:20.319839 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 4 01:00:20.319850 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 4 01:00:20.319863 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 4 01:00:20.319913 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 4 01:00:20.320150 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 01:00:20.320166 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 01:00:20.320180 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Mar 4 01:00:20.320232 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 4 01:00:20.320244 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 4 01:00:20.320256 systemd[1]: Starting systemd-fsck-usr.service... Mar 4 01:00:20.320267 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 4 01:00:20.320279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 4 01:00:20.320328 systemd-journald[195]: Collecting audit messages is disabled. Mar 4 01:00:20.320511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 01:00:20.320523 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 4 01:00:20.320535 systemd-journald[195]: Journal started Mar 4 01:00:20.320559 systemd-journald[195]: Runtime Journal (/run/log/journal/0f80a5f5d5464a05af6aaa17ed6fc48b) is 6.0M, max 48.4M, 42.3M free. Mar 4 01:00:20.333909 systemd[1]: Started systemd-journald.service - Journal Service. Mar 4 01:00:20.338975 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 01:00:20.342848 systemd[1]: Finished systemd-fsck-usr.service. Mar 4 01:00:20.372710 systemd-modules-load[196]: Inserted module 'overlay' Mar 4 01:00:20.386259 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 4 01:00:20.402782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 4 01:00:20.420740 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 01:00:20.449787 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 4 01:00:20.478232 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 4 01:00:20.478281 kernel: Bridge firewalling registered
Mar 4 01:00:20.485996 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 4 01:00:20.494789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:00:20.830576 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:00:20.832471 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:00:20.895487 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:00:20.900782 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 01:00:20.946586 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:00:20.948678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:00:21.009774 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 01:00:21.016480 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:00:21.035613 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 01:00:21.087234 dracut-cmdline[233]: dracut-dracut-053
Mar 4 01:00:21.092471 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 01:00:21.126581 systemd-resolved[229]: Positive Trust Anchors:
Mar 4 01:00:21.126643 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 01:00:21.126729 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 01:00:21.173484 systemd-resolved[229]: Defaulting to hostname 'linux'.
Mar 4 01:00:21.202906 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 01:00:21.208650 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:00:21.297917 kernel: SCSI subsystem initialized
Mar 4 01:00:21.323663 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 01:00:21.353525 kernel: iscsi: registered transport (tcp)
Mar 4 01:00:21.416539 kernel: iscsi: registered transport (qla4xxx)
Mar 4 01:00:21.416761 kernel: QLogic iSCSI HBA Driver
Mar 4 01:00:21.543692 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 01:00:21.562748 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 01:00:21.666190 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 01:00:21.669491 kernel: device-mapper: uevent: version 1.0.3
Mar 4 01:00:21.678254 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 4 01:00:21.796528 kernel: raid6: avx2x4 gen() 12052 MB/s
Mar 4 01:00:21.816254 kernel: raid6: avx2x2 gen() 20433 MB/s
Mar 4 01:00:21.836987 kernel: raid6: avx2x1 gen() 16022 MB/s
Mar 4 01:00:21.837074 kernel: raid6: using algorithm avx2x2 gen() 20433 MB/s
Mar 4 01:00:21.859322 kernel: raid6: .... xor() 22329 MB/s, rmw enabled
Mar 4 01:00:21.859500 kernel: raid6: using avx2x2 recovery algorithm
Mar 4 01:00:21.923604 kernel: xor: automatically using best checksumming function avx
Mar 4 01:00:22.259545 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 4 01:00:22.301211 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 01:00:22.323787 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:00:22.348447 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Mar 4 01:00:22.358840 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:00:22.377829 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 4 01:00:22.445566 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Mar 4 01:00:22.539997 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 01:00:22.566705 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 01:00:22.763932 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:00:22.790255 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 4 01:00:22.832074 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 4 01:00:22.837455 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 01:00:22.848272 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:00:22.858332 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 01:00:22.885326 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 4 01:00:22.908893 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 01:00:22.939460 kernel: libata version 3.00 loaded.
Mar 4 01:00:22.940570 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 01:00:22.952279 kernel: cryptd: max_cpu_qlen set to 1000
Mar 4 01:00:22.966488 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 4 01:00:22.966863 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 4 01:00:22.941067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:00:23.003416 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 4 01:00:23.003453 kernel: GPT:9289727 != 19775487
Mar 4 01:00:23.003464 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 4 01:00:23.003475 kernel: GPT:9289727 != 19775487
Mar 4 01:00:23.003484 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 4 01:00:23.003494 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:00:22.968319 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:00:23.003242 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:00:23.003638 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:00:23.042969 kernel: ahci 0000:00:1f.2: version 3.0
Mar 4 01:00:23.043300 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 4 01:00:23.043315 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 4 01:00:23.043580 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 4 01:00:23.015481 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:00:23.050846 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:00:23.072824 kernel: scsi host0: ahci
Mar 4 01:00:23.073214 kernel: scsi host1: ahci
Mar 4 01:00:23.073621 kernel: scsi host2: ahci
Mar 4 01:00:23.073998 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 4 01:00:23.074017 kernel: AES CTR mode by8 optimization enabled
Mar 4 01:00:23.078704 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 4 01:00:23.088860 kernel: scsi host3: ahci
Mar 4 01:00:23.097852 kernel: scsi host4: ahci
Mar 4 01:00:23.098111 kernel: scsi host5: ahci
Mar 4 01:00:23.105527 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 4 01:00:23.110829 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 4 01:00:23.110872 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461)
Mar 4 01:00:23.110889 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 4 01:00:23.117423 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 4 01:00:23.140240 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 4 01:00:23.140264 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 4 01:00:23.140275 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 4 01:00:23.140285 kernel: BTRFS: device fsid 251c1416-ef37-47f1-be3f-832af5870605 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (480)
Mar 4 01:00:23.156975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 4 01:00:23.164857 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 4 01:00:23.167280 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 4 01:00:23.204708 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 4 01:00:23.699456 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 4 01:00:23.699509 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 4 01:00:23.699526 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 4 01:00:23.699541 kernel: ata3.00: applying bridge limits
Mar 4 01:00:23.699555 kernel: ata3.00: configured for UDMA/100
Mar 4 01:00:23.699570 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 4 01:00:23.699585 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 4 01:00:23.699599 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 4 01:00:23.699614 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 4 01:00:23.700073 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 4 01:00:23.700091 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 4 01:00:23.700488 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 4 01:00:23.700505 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 4 01:00:23.700315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:00:23.729905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:00:23.755660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:00:23.769464 disk-uuid[575]: Primary Header is updated.
Mar 4 01:00:23.769464 disk-uuid[575]: Secondary Entries is updated.
Mar 4 01:00:23.769464 disk-uuid[575]: Secondary Header is updated.
Mar 4 01:00:23.795481 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:00:23.809855 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:00:23.840513 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:00:24.838631 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:00:24.841071 disk-uuid[579]: The operation has completed successfully.
Mar 4 01:00:24.949133 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 4 01:00:24.949475 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 4 01:00:24.974782 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 4 01:00:25.002229 sh[594]: Success
Mar 4 01:00:25.071607 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 4 01:00:25.172110 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 4 01:00:25.208753 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 4 01:00:25.215716 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 4 01:00:25.313639 kernel: BTRFS info (device dm-0): first mount of filesystem 251c1416-ef37-47f1-be3f-832af5870605
Mar 4 01:00:25.314258 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:00:25.314281 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 4 01:00:25.331765 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 4 01:00:25.332218 kernel: BTRFS info (device dm-0): using free space tree
Mar 4 01:00:25.392551 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 4 01:00:25.400728 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 4 01:00:25.420782 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 4 01:00:25.435527 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 4 01:00:25.473508 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:00:25.473574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:00:25.473594 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:00:25.500555 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:00:25.524766 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 4 01:00:25.537473 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:00:25.555504 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 4 01:00:25.564684 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 4 01:00:26.325066 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 01:00:26.355010 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 01:00:26.407596 ignition[702]: Ignition 2.19.0
Mar 4 01:00:26.407650 ignition[702]: Stage: fetch-offline
Mar 4 01:00:26.407784 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:00:26.407806 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:00:26.408142 ignition[702]: parsed url from cmdline: ""
Mar 4 01:00:26.408265 ignition[702]: no config URL provided
Mar 4 01:00:26.408278 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 01:00:26.408297 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Mar 4 01:00:26.408496 ignition[702]: op(1): [started] loading QEMU firmware config module
Mar 4 01:00:26.408548 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 4 01:00:26.556931 systemd-networkd[781]: lo: Link UP
Mar 4 01:00:26.556953 systemd-networkd[781]: lo: Gained carrier
Mar 4 01:00:26.561599 systemd-networkd[781]: Enumeration completed
Mar 4 01:00:26.563423 ignition[702]: op(1): [finished] loading QEMU firmware config module
Mar 4 01:00:26.563099 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 01:00:26.563909 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:00:26.563916 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 01:00:26.566615 systemd-networkd[781]: eth0: Link UP
Mar 4 01:00:26.566623 systemd-networkd[781]: eth0: Gained carrier
Mar 4 01:00:26.566637 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:00:26.571542 systemd[1]: Reached target network.target - Network.
Mar 4 01:00:26.644545 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 4 01:00:26.955858 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.50
Mar 4 01:00:26.955917 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Mar 4 01:00:27.172492 ignition[702]: parsing config with SHA512: c178f92bfaa65f9b5ac92cae67b83494e59e66424dccf02426ea65c96202676fc56cd773522bfb2af2e810eb1614a7a8e067aa3b9810e30b95dd45fb1c8d3419
Mar 4 01:00:27.237762 unknown[702]: fetched base config from "system"
Mar 4 01:00:27.237785 unknown[702]: fetched user config from "qemu"
Mar 4 01:00:27.238248 ignition[702]: fetch-offline: fetch-offline passed
Mar 4 01:00:27.248701 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 01:00:27.242089 ignition[702]: Ignition finished successfully
Mar 4 01:00:27.254629 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 4 01:00:27.300631 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 01:00:27.463747 ignition[786]: Ignition 2.19.0
Mar 4 01:00:27.463808 ignition[786]: Stage: kargs
Mar 4 01:00:27.464062 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:00:27.464078 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:00:27.466112 ignition[786]: kargs: kargs passed
Mar 4 01:00:27.466247 ignition[786]: Ignition finished successfully
Mar 4 01:00:27.496609 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 01:00:27.515072 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 01:00:27.767796 ignition[793]: Ignition 2.19.0
Mar 4 01:00:27.767858 ignition[793]: Stage: disks
Mar 4 01:00:27.768267 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:00:27.768286 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:00:27.770102 ignition[793]: disks: disks passed
Mar 4 01:00:27.770236 ignition[793]: Ignition finished successfully
Mar 4 01:00:27.795805 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 01:00:27.800007 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 01:00:27.810144 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 01:00:27.815242 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 01:00:27.828496 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 01:00:27.840825 systemd[1]: Reached target basic.target - Basic System.
Mar 4 01:00:27.859899 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 01:00:27.901985 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 4 01:00:27.909990 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 01:00:27.933986 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 01:00:27.996312 systemd-networkd[781]: eth0: Gained IPv6LL
Mar 4 01:00:28.239498 kernel: EXT4-fs (vda9): mounted filesystem 77c4d29a-0423-4e33-8b82-61754d97532c r/w with ordered data mode. Quota mode: none.
Mar 4 01:00:28.241139 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 01:00:28.249576 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 01:00:28.272890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 01:00:28.301624 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 01:00:28.314294 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Mar 4 01:00:28.310956 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 4 01:00:28.311023 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 01:00:28.352081 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:00:28.352593 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:00:28.352616 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:00:28.311057 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 01:00:28.361003 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:00:28.362783 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 01:00:28.399150 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 01:00:28.425787 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 01:00:28.530446 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 01:00:28.544284 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Mar 4 01:00:28.562048 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 01:00:28.583960 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 01:00:28.965105 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 01:00:28.983658 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 01:00:29.006315 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 01:00:29.021557 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:00:29.021831 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 01:00:29.166519 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 01:00:29.203041 ignition[926]: INFO : Ignition 2.19.0
Mar 4 01:00:29.203041 ignition[926]: INFO : Stage: mount
Mar 4 01:00:29.210989 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:00:29.210989 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:00:29.222769 ignition[926]: INFO : mount: mount passed
Mar 4 01:00:29.222769 ignition[926]: INFO : Ignition finished successfully
Mar 4 01:00:29.234638 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 01:00:29.253235 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 01:00:29.295806 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 01:00:29.315510 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Mar 4 01:00:29.324598 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:00:29.324675 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:00:29.324692 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:00:29.338722 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:00:29.341108 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 01:00:29.522823 ignition[957]: INFO : Ignition 2.19.0
Mar 4 01:00:29.522823 ignition[957]: INFO : Stage: files
Mar 4 01:00:29.531777 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:00:29.531777 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:00:29.555430 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 01:00:29.575142 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 01:00:29.575142 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 01:00:29.630694 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 01:00:29.640057 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 01:00:29.668800 unknown[957]: wrote ssh authorized keys file for user: core
Mar 4 01:00:29.675548 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 01:00:29.704689 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 01:00:29.704689 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 4 01:00:29.950692 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 4 01:00:30.502471 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 01:00:30.502471 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 01:00:30.522878 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 01:00:30.530753 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 01:00:30.537461 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 01:00:30.537461 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 01:00:30.564300 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 01:00:30.564300 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 01:00:30.995799 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 01:00:31.005615 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 01:00:31.015086 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 01:00:31.022342 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 4 01:00:31.033620 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 4 01:00:31.033620 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 4 01:00:31.054002 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 4 01:00:31.460097 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 4 01:00:33.210150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 4 01:00:33.210150 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 4 01:00:33.224423 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 01:00:33.234340 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 01:00:33.234340 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 4 01:00:33.234340 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 4 01:00:33.234340 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 4 01:00:33.234340 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 4 01:00:33.234340 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 4 01:00:33.234340 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 4 01:00:33.307835 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 4 01:00:33.329024 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 4 01:00:33.346306 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 4 01:00:33.346306 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 01:00:33.346306 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 01:00:33.346306 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:00:33.346306 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:00:33.346306 ignition[957]: INFO : files: files passed
Mar 4 01:00:33.346306 ignition[957]: INFO : Ignition finished successfully
Mar 4 01:00:33.336672 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 01:00:33.377178 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 01:00:33.396175 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 01:00:33.403521 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 01:00:33.435025 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 4 01:00:33.403745 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 01:00:33.448527 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:00:33.448527 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:00:33.430303 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 01:00:33.473011 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:00:33.435692 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 01:00:33.454794 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 01:00:33.505943 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 01:00:33.506174 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 01:00:33.515149 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 01:00:33.524194 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 4 01:00:33.528994 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 01:00:33.551106 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 01:00:33.597106 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 01:00:33.638107 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 01:00:33.697202 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:00:33.716671 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:00:33.733873 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 01:00:33.748966 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 01:00:33.750511 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 01:00:33.766991 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 01:00:33.783835 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 01:00:33.793473 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 01:00:33.802818 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 01:00:33.808750 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 01:00:33.815782 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 01:00:33.831850 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 01:00:33.841649 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 01:00:33.850977 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 01:00:33.862784 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 01:00:33.883755 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 01:00:33.885638 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 01:00:33.899938 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:00:33.919051 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:00:33.950783 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 01:00:33.951555 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 01:00:33.959506 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 01:00:33.960116 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 01:00:33.986830 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 01:00:33.987209 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 01:00:33.993046 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 01:00:34.006267 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 01:00:34.036923 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:00:34.051074 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 01:00:34.059764 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 01:00:34.069449 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 01:00:34.073464 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 01:00:34.092182 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 01:00:34.101593 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 01:00:34.110633 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 01:00:34.115559 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 01:00:34.126211 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 01:00:34.130538 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 01:00:34.149757 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 01:00:34.159542 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 01:00:34.168403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 01:00:34.173050 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:00:34.192120 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 01:00:34.192846 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 01:00:34.215727 ignition[1012]: INFO : Ignition 2.19.0
Mar 4 01:00:34.215727 ignition[1012]: INFO : Stage: umount
Mar 4 01:00:34.222760 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:00:34.222760 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:00:34.222760 ignition[1012]: INFO : umount: umount passed
Mar 4 01:00:34.222760 ignition[1012]: INFO : Ignition finished successfully
Mar 4 01:00:34.233188 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 01:00:34.234804 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 01:00:34.235096 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 01:00:34.243201 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 01:00:34.243532 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 01:00:34.256196 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 01:00:34.256542 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 01:00:34.268664 systemd[1]: Stopped target network.target - Network. Mar 4 01:00:34.280595 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 4 01:00:34.280737 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 4 01:00:34.300412 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 4 01:00:34.300525 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 4 01:00:34.311865 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 4 01:00:34.311952 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 4 01:00:34.316509 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 4 01:00:34.316600 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 4 01:00:34.325266 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 4 01:00:34.325438 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 4 01:00:34.333704 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 4 01:00:34.342442 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 4 01:00:34.360868 systemd-networkd[781]: eth0: DHCPv6 lease lost Mar 4 01:00:34.366721 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 4 01:00:34.367148 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 4 01:00:34.384199 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 4 01:00:34.384507 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 4 01:00:34.418308 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 4 01:00:34.442623 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 4 01:00:34.442819 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 01:00:34.449641 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 4 01:00:34.459984 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 4 01:00:34.460319 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 4 01:00:34.489564 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 4 01:00:34.489718 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 4 01:00:34.496861 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 4 01:00:34.497006 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 4 01:00:34.513615 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 4 01:00:34.513704 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 01:00:34.531083 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 4 01:00:34.531333 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 4 01:00:34.566640 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 4 01:00:34.567028 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 01:00:34.573608 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 4 01:00:34.573693 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 4 01:00:34.580475 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 4 01:00:34.580530 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 01:00:34.599877 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 4 01:00:34.599980 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 4 01:00:34.613184 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 4 01:00:34.613335 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 4 01:00:34.625461 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 4 01:00:34.625550 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 01:00:34.673707 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 4 01:00:34.680295 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 4 01:00:34.680488 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 01:00:34.690529 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 4 01:00:34.690619 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 01:00:34.703637 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 4 01:00:34.703734 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 01:00:34.708633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 01:00:34.708718 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:00:34.729567 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 4 01:00:34.729789 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 4 01:00:34.746897 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 4 01:00:34.754027 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 4 01:00:34.777301 systemd[1]: Switching root. Mar 4 01:00:34.818635 systemd-journald[195]: Journal stopped Mar 4 01:00:37.114844 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). 
Mar 4 01:00:37.114941 kernel: SELinux: policy capability network_peer_controls=1 Mar 4 01:00:37.114979 kernel: SELinux: policy capability open_perms=1 Mar 4 01:00:37.114997 kernel: SELinux: policy capability extended_socket_class=1 Mar 4 01:00:37.115014 kernel: SELinux: policy capability always_check_network=0 Mar 4 01:00:37.115030 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 4 01:00:37.115048 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 4 01:00:37.115064 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 4 01:00:37.115083 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 4 01:00:37.115100 kernel: audit: type=1403 audit(1772586035.111:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 4 01:00:37.115128 systemd[1]: Successfully loaded SELinux policy in 63.609ms. Mar 4 01:00:37.115170 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.013ms. Mar 4 01:00:37.115193 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 01:00:37.115213 systemd[1]: Detected virtualization kvm. Mar 4 01:00:37.115234 systemd[1]: Detected architecture x86-64. Mar 4 01:00:37.115304 systemd[1]: Detected first boot. Mar 4 01:00:37.115325 systemd[1]: Initializing machine ID from VM UUID. Mar 4 01:00:37.115422 zram_generator::config[1057]: No configuration found. Mar 4 01:00:37.115498 systemd[1]: Populated /etc with preset unit settings. Mar 4 01:00:37.115521 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 4 01:00:37.115539 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 4 01:00:37.115556 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Mar 4 01:00:37.115583 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 4 01:00:37.115602 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 4 01:00:37.115619 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 4 01:00:37.115635 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 4 01:00:37.115657 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 4 01:00:37.115673 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 4 01:00:37.115690 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 4 01:00:37.115706 systemd[1]: Created slice user.slice - User and Session Slice. Mar 4 01:00:37.115723 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:00:37.115740 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 01:00:37.115762 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 4 01:00:37.115779 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 4 01:00:37.115800 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 4 01:00:37.115825 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 4 01:00:37.115843 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 4 01:00:37.115859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 01:00:37.115875 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 4 01:00:37.115894 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Mar 4 01:00:37.115910 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 4 01:00:37.115926 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 4 01:00:37.115947 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 01:00:37.115964 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 4 01:00:37.115979 systemd[1]: Reached target slices.target - Slice Units. Mar 4 01:00:37.115995 systemd[1]: Reached target swap.target - Swaps. Mar 4 01:00:37.116011 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 4 01:00:37.116028 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 4 01:00:37.116048 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 4 01:00:37.116065 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 4 01:00:37.116086 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 01:00:37.116104 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 4 01:00:37.116129 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 4 01:00:37.116155 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 4 01:00:37.116175 systemd[1]: Mounting media.mount - External Media Directory... Mar 4 01:00:37.116195 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 01:00:37.116215 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 4 01:00:37.116237 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 4 01:00:37.116317 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 4 01:00:37.116337 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 4 01:00:37.116446 systemd[1]: Reached target machines.target - Containers. Mar 4 01:00:37.116466 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 4 01:00:37.116484 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 01:00:37.116500 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 4 01:00:37.116518 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 4 01:00:37.116535 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 4 01:00:37.116552 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 4 01:00:37.116569 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 4 01:00:37.116626 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 4 01:00:37.116656 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 4 01:00:37.116678 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 4 01:00:37.116695 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 4 01:00:37.116714 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 4 01:00:37.116734 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 4 01:00:37.116753 systemd[1]: Stopped systemd-fsck-usr.service. Mar 4 01:00:37.116781 kernel: fuse: init (API version 7.39) Mar 4 01:00:37.116802 systemd[1]: Starting systemd-journald.service - Journal Service... 
Mar 4 01:00:37.116824 kernel: loop: module loaded Mar 4 01:00:37.116855 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 4 01:00:37.116872 kernel: ACPI: bus type drm_connector registered Mar 4 01:00:37.116922 systemd-journald[1141]: Collecting audit messages is disabled. Mar 4 01:00:37.116956 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 4 01:00:37.116973 systemd-journald[1141]: Journal started Mar 4 01:00:37.117004 systemd-journald[1141]: Runtime Journal (/run/log/journal/0f80a5f5d5464a05af6aaa17ed6fc48b) is 6.0M, max 48.4M, 42.3M free. Mar 4 01:00:36.277219 systemd[1]: Queued start job for default target multi-user.target. Mar 4 01:00:36.325901 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 4 01:00:36.327124 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 4 01:00:36.328019 systemd[1]: systemd-journald.service: Consumed 2.843s CPU time. Mar 4 01:00:37.131448 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 4 01:00:37.151032 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 01:00:37.151113 systemd[1]: verity-setup.service: Deactivated successfully. Mar 4 01:00:37.157210 systemd[1]: Stopped verity-setup.service. Mar 4 01:00:37.170537 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 01:00:37.184313 systemd[1]: Started systemd-journald.service - Journal Service. Mar 4 01:00:37.185632 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 4 01:00:37.189825 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 4 01:00:37.194058 systemd[1]: Mounted media.mount - External Media Directory. Mar 4 01:00:37.197833 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Mar 4 01:00:37.202440 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 4 01:00:37.206644 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 4 01:00:37.211061 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 4 01:00:37.218856 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 01:00:37.229487 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 4 01:00:37.229938 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 4 01:00:37.241419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 4 01:00:37.241729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 4 01:00:37.250425 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 4 01:00:37.250923 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 4 01:00:37.258826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 4 01:00:37.259233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 01:00:37.282061 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 4 01:00:37.282628 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 4 01:00:37.304926 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 01:00:37.305207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 01:00:37.313106 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 4 01:00:37.324936 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 4 01:00:37.334767 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 4 01:00:37.357022 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 4 01:00:37.374577 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Mar 4 01:00:37.393458 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 4 01:00:37.399597 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 4 01:00:37.399722 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 4 01:00:37.406240 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 4 01:00:37.414781 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 4 01:00:37.422552 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 4 01:00:37.427028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 4 01:00:37.430058 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 4 01:00:37.438146 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 4 01:00:37.444137 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 4 01:00:37.455926 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 4 01:00:37.463893 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 4 01:00:37.466918 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 4 01:00:37.473952 systemd-journald[1141]: Time spent on flushing to /var/log/journal/0f80a5f5d5464a05af6aaa17ed6fc48b is 23.619ms for 945 entries. Mar 4 01:00:37.473952 systemd-journald[1141]: System Journal (/var/log/journal/0f80a5f5d5464a05af6aaa17ed6fc48b) is 8.0M, max 195.6M, 187.6M free. Mar 4 01:00:37.778107 systemd-journald[1141]: Received client request to flush runtime journal. 
Mar 4 01:00:37.484179 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 4 01:00:38.031710 kernel: loop0: detected capacity change from 0 to 140768 Mar 4 01:00:37.714159 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 4 01:00:37.751949 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 4 01:00:37.760655 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 4 01:00:37.769922 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 4 01:00:38.001984 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 4 01:00:38.012754 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 4 01:00:38.021766 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 01:00:38.039765 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 4 01:00:38.055717 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 4 01:00:38.068449 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 4 01:00:38.070634 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 4 01:00:38.131609 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 4 01:00:38.144613 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 4 01:00:38.145868 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 4 01:00:38.158069 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Mar 4 01:00:38.160506 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Mar 4 01:00:38.162523 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Mar 4 01:00:38.165489 kernel: loop1: detected capacity change from 0 to 219192 Mar 4 01:00:38.183669 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 01:00:38.221999 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 4 01:00:38.423933 kernel: loop2: detected capacity change from 0 to 142488 Mar 4 01:00:38.621489 kernel: loop3: detected capacity change from 0 to 140768 Mar 4 01:00:38.822554 kernel: loop4: detected capacity change from 0 to 219192 Mar 4 01:00:38.822854 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 4 01:00:38.839909 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 4 01:00:38.882540 kernel: loop5: detected capacity change from 0 to 142488 Mar 4 01:00:39.182974 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 4 01:00:39.184613 (sd-merge)[1195]: Merged extensions into '/usr'. Mar 4 01:00:39.194986 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Mar 4 01:00:39.195009 systemd[1]: Reloading... Mar 4 01:00:39.217977 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Mar 4 01:00:39.218006 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Mar 4 01:00:39.569786 zram_generator::config[1223]: No configuration found. Mar 4 01:00:40.233793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:00:40.330231 systemd[1]: Reloading finished in 1134 ms. Mar 4 01:00:40.357562 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 4 01:00:40.395816 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Mar 4 01:00:40.403981 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 4 01:00:40.412917 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 01:00:40.680592 systemd[1]: Starting ensure-sysext.service... Mar 4 01:00:40.690238 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 4 01:00:40.734648 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Mar 4 01:00:40.734711 systemd[1]: Reloading... Mar 4 01:00:41.086467 zram_generator::config[1290]: No configuration found. Mar 4 01:00:41.284255 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 4 01:00:41.289964 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 4 01:00:41.291935 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 4 01:00:41.293022 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Mar 4 01:00:41.293452 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Mar 4 01:00:41.300223 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Mar 4 01:00:41.302586 systemd-tmpfiles[1264]: Skipping /boot Mar 4 01:00:41.340921 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Mar 4 01:00:41.341064 systemd-tmpfiles[1264]: Skipping /boot Mar 4 01:00:41.615749 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:00:41.690163 systemd[1]: Reloading finished in 954 ms. Mar 4 01:00:41.715609 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Mar 4 01:00:41.744491 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 01:00:41.782863 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 4 01:00:41.797260 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 4 01:00:41.804664 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 4 01:00:41.816070 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 4 01:00:41.831608 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 01:00:41.842805 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 4 01:00:41.878027 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 01:00:41.878932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 01:00:41.901689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 4 01:00:41.916665 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 4 01:00:41.937128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 4 01:00:41.941787 augenrules[1353]: No rules Mar 4 01:00:41.942428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 4 01:00:41.947855 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 4 01:00:41.950760 systemd-udevd[1340]: Using default interface naming scheme 'v255'. Mar 4 01:00:41.952341 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 4 01:00:41.954564 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 01:00:41.959864 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 4 01:00:41.966457 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 4 01:00:41.967017 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 4 01:00:41.972924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 4 01:00:41.973137 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 01:00:41.983862 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 01:00:41.984433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 01:00:42.006077 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 01:00:42.019152 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 4 01:00:42.026432 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 4 01:00:42.060133 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 4 01:00:42.093689 systemd[1]: Finished ensure-sysext.service. Mar 4 01:00:42.105804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 01:00:42.106030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 01:00:42.118689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 4 01:00:42.125605 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 4 01:00:42.133924 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 4 01:00:42.143906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 4 01:00:42.149657 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:00:42.158621 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 01:00:42.167745 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 4 01:00:42.186114 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 4 01:00:42.199955 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 4 01:00:42.200034 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:00:42.200894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:00:42.201201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:00:42.208938 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1390)
Mar 4 01:00:42.211335 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:00:42.211638 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:00:42.216949 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:00:42.217197 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:00:42.221746 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 4 01:00:42.227661 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 01:00:42.227789 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 01:00:42.249016 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 01:00:42.249487 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 01:00:42.254046 systemd-resolved[1339]: Positive Trust Anchors:
Mar 4 01:00:42.254100 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 01:00:42.254128 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 01:00:42.261254 systemd-resolved[1339]: Defaulting to hostname 'linux'.
Mar 4 01:00:42.266264 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 01:00:42.271139 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:00:42.471563 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 4 01:00:42.521241 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 4 01:00:42.536660 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 4 01:00:42.542905 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 4 01:00:42.547274 systemd-networkd[1397]: lo: Link UP
Mar 4 01:00:42.547466 systemd-networkd[1397]: lo: Gained carrier
Mar 4 01:00:42.549069 systemd[1]: Reached target time-set.target - System Time Set.
Mar 4 01:00:42.549921 systemd-networkd[1397]: Enumeration completed
Mar 4 01:00:42.553939 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 01:00:42.560091 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:00:42.560103 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 01:00:42.560167 systemd[1]: Reached target network.target - Network.
Mar 4 01:00:42.561898 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:00:42.561968 systemd-networkd[1397]: eth0: Link UP
Mar 4 01:00:42.561974 systemd-networkd[1397]: eth0: Gained carrier
Mar 4 01:00:42.561985 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:00:42.581527 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 4 01:00:42.581734 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 4 01:00:42.585892 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection.
Mar 4 01:00:42.587087 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 4 01:00:42.587163 systemd-timesyncd[1398]: Initial clock synchronization to Wed 2026-03-04 01:00:42.695576 UTC.
Mar 4 01:00:42.590660 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 4 01:00:42.603486 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 4 01:00:42.616496 kernel: ACPI: button: Power Button [PWRF]
Mar 4 01:00:42.633440 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 4 01:00:42.633959 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 4 01:00:42.638801 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 4 01:00:42.669478 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 4 01:00:43.441864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:00:43.650453 kernel: mousedev: PS/2 mouse device common for all mice
Mar 4 01:00:43.711336 kernel: kvm_amd: TSC scaling supported
Mar 4 01:00:43.714153 kernel: kvm_amd: Nested Virtualization enabled
Mar 4 01:00:43.720770 kernel: kvm_amd: Nested Paging enabled
Mar 4 01:00:43.721281 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 4 01:00:43.721306 kernel: kvm_amd: PMU virtualization is disabled
Mar 4 01:00:44.039181 kernel: EDAC MC: Ver: 3.0.0
Mar 4 01:00:44.152251 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 4 01:00:44.183720 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 4 01:00:44.362281 systemd-networkd[1397]: eth0: Gained IPv6LL
Mar 4 01:00:44.371916 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 4 01:00:44.734108 systemd[1]: Reached target network-online.target - Network is Online.
Mar 4 01:00:44.746060 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:00:44.767066 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 01:00:44.865699 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 4 01:00:44.874647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:00:44.880540 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 01:00:44.886709 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 4 01:00:44.916182 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 4 01:00:44.922641 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 4 01:00:44.928627 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 4 01:00:44.934703 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 4 01:00:44.940833 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 4 01:00:44.940939 systemd[1]: Reached target paths.target - Path Units.
Mar 4 01:00:44.945439 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 01:00:44.954001 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 4 01:00:44.961206 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 4 01:00:44.979098 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 4 01:00:44.985959 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 4 01:00:44.997881 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 4 01:00:45.011733 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 01:00:45.020596 systemd[1]: Reached target basic.target - Basic System.
Mar 4 01:00:45.025608 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 4 01:00:45.025685 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 4 01:00:45.029145 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 4 01:00:45.035794 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 4 01:00:45.045849 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 4 01:00:45.054601 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 4 01:00:45.066985 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 4 01:00:45.074631 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 4 01:00:45.082067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:00:45.124142 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 4 01:00:45.176872 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 4 01:00:45.185655 jq[1437]: false
Mar 4 01:00:45.186509 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 4 01:00:45.196050 dbus-daemon[1436]: [system] SELinux support is enabled
Mar 4 01:00:45.198976 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 4 01:00:45.213274 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 4 01:00:45.229448 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 01:00:45.239554 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 4 01:00:45.247666 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 4 01:00:45.248902 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 4 01:00:45.256682 systemd[1]: Starting update-engine.service - Update Engine...
Mar 4 01:00:45.266138 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 4 01:00:45.273595 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 4 01:00:45.298870 extend-filesystems[1438]: Found loop3
Mar 4 01:00:45.298870 extend-filesystems[1438]: Found loop4
Mar 4 01:00:45.298870 extend-filesystems[1438]: Found loop5
Mar 4 01:00:45.298870 extend-filesystems[1438]: Found sr0
Mar 4 01:00:45.298870 extend-filesystems[1438]: Found vda
Mar 4 01:00:45.298870 extend-filesystems[1438]: Found vda1
Mar 4 01:00:45.298870 extend-filesystems[1438]: Found vda2
Mar 4 01:00:45.298870 extend-filesystems[1438]: Found vda3
Mar 4 01:00:45.298870 extend-filesystems[1438]: Found usr
Mar 4 01:00:45.298870 extend-filesystems[1438]: Found vda4
Mar 4 01:00:45.369306 extend-filesystems[1438]: Found vda6
Mar 4 01:00:45.369306 extend-filesystems[1438]: Found vda7
Mar 4 01:00:45.369306 extend-filesystems[1438]: Found vda9
Mar 4 01:00:45.369306 extend-filesystems[1438]: Checking size of /dev/vda9
Mar 4 01:00:45.369306 extend-filesystems[1438]: Resized partition /dev/vda9
Mar 4 01:00:45.600721 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 4 01:00:45.303712 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 4 01:00:45.606808 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024)
Mar 4 01:00:45.304520 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 4 01:00:45.625305 jq[1457]: true
Mar 4 01:00:45.305485 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 4 01:00:45.625849 update_engine[1453]: I20260304 01:00:45.590219 1453 main.cc:92] Flatcar Update Engine starting
Mar 4 01:00:45.310162 systemd[1]: motdgen.service: Deactivated successfully.
Mar 4 01:00:45.310630 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 4 01:00:45.313446 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 4 01:00:45.334134 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 4 01:00:45.334573 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 4 01:00:45.629932 update_engine[1453]: I20260304 01:00:45.627125 1453 update_check_scheduler.cc:74] Next update check in 10m40s
Mar 4 01:00:45.565176 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 4 01:00:45.565211 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 4 01:00:45.575216 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 4 01:00:45.575243 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 4 01:00:45.627779 systemd[1]: Started update-engine.service - Update Engine.
Mar 4 01:00:45.641465 jq[1471]: true
Mar 4 01:00:45.646131 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 4 01:00:45.646624 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 4 01:00:45.661022 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 4 01:00:45.661785 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 4 01:00:45.683909 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1385)
Mar 4 01:00:45.670867 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 4 01:00:45.689004 tar[1468]: linux-amd64/LICENSE
Mar 4 01:00:45.689902 tar[1468]: linux-amd64/helm
Mar 4 01:00:45.695705 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 4 01:00:45.696877 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 4 01:00:45.736935 systemd-logind[1450]: New seat seat0.
Mar 4 01:00:45.751221 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 4 01:00:45.741554 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 4 01:00:45.929971 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 4 01:00:45.929971 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 4 01:00:45.929971 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 4 01:00:45.987755 extend-filesystems[1438]: Resized filesystem in /dev/vda9
Mar 4 01:00:45.987114 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 4 01:00:46.015798 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 4 01:00:46.764157 bash[1507]: Updated "/home/core/.ssh/authorized_keys"
Mar 4 01:00:46.771118 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 4 01:00:46.789341 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 4 01:00:46.843441 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 4 01:00:47.483620 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 4 01:00:47.798276 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 4 01:00:47.832909 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 4 01:00:48.056760 systemd[1]: issuegen.service: Deactivated successfully.
Mar 4 01:00:48.057191 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 4 01:00:48.094126 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 4 01:00:48.797946 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 4 01:00:48.901203 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 4 01:00:49.234832 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 4 01:00:49.240300 systemd[1]: Reached target getty.target - Login Prompts.
Mar 4 01:00:49.787757 containerd[1481]: time="2026-03-04T01:00:49.787219489Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 4 01:00:50.093809 containerd[1481]: time="2026-03-04T01:00:50.092515729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:00:50.101437 containerd[1481]: time="2026-03-04T01:00:50.101210689Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:00:50.101437 containerd[1481]: time="2026-03-04T01:00:50.101303320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 4 01:00:50.101437 containerd[1481]: time="2026-03-04T01:00:50.101404866Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 4 01:00:50.101889 containerd[1481]: time="2026-03-04T01:00:50.101778318Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 4 01:00:50.101889 containerd[1481]: time="2026-03-04T01:00:50.101862748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 4 01:00:50.102053 containerd[1481]: time="2026-03-04T01:00:50.101967524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:00:50.102053 containerd[1481]: time="2026-03-04T01:00:50.102009191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:00:50.102471 containerd[1481]: time="2026-03-04T01:00:50.102327986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:00:50.102471 containerd[1481]: time="2026-03-04T01:00:50.102437149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 4 01:00:50.102471 containerd[1481]: time="2026-03-04T01:00:50.102458832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:00:50.102597 containerd[1481]: time="2026-03-04T01:00:50.102475535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 4 01:00:50.102696 containerd[1481]: time="2026-03-04T01:00:50.102641699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:00:50.103305 containerd[1481]: time="2026-03-04T01:00:50.103220930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:00:50.103720 containerd[1481]: time="2026-03-04T01:00:50.103602522Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:00:50.103761 containerd[1481]: time="2026-03-04T01:00:50.103728498Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 4 01:00:50.104197 containerd[1481]: time="2026-03-04T01:00:50.104011220Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 4 01:00:50.104706 containerd[1481]: time="2026-03-04T01:00:50.104472574Z" level=info msg="metadata content store policy set" policy=shared
Mar 4 01:00:50.114512 containerd[1481]: time="2026-03-04T01:00:50.113619957Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 4 01:00:50.114512 containerd[1481]: time="2026-03-04T01:00:50.113952457Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 4 01:00:50.114512 containerd[1481]: time="2026-03-04T01:00:50.113981837Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 4 01:00:50.114512 containerd[1481]: time="2026-03-04T01:00:50.114019389Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 4 01:00:50.114512 containerd[1481]: time="2026-03-04T01:00:50.114094190Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 4 01:00:50.114905 containerd[1481]: time="2026-03-04T01:00:50.114787745Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 4 01:00:50.118943 containerd[1481]: time="2026-03-04T01:00:50.115794690Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 4 01:00:50.118943 containerd[1481]: time="2026-03-04T01:00:50.118261032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 4 01:00:50.118943 containerd[1481]: time="2026-03-04T01:00:50.118295646Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 4 01:00:50.118943 containerd[1481]: time="2026-03-04T01:00:50.118313797Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 4 01:00:50.118943 containerd[1481]: time="2026-03-04T01:00:50.118332402Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 4 01:00:50.118943 containerd[1481]: time="2026-03-04T01:00:50.118820520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 4 01:00:50.119909 containerd[1481]: time="2026-03-04T01:00:50.119341621Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 4 01:00:50.121096 containerd[1481]: time="2026-03-04T01:00:50.120752508Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 4 01:00:50.121825 containerd[1481]: time="2026-03-04T01:00:50.120943475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 4 01:00:50.121948 containerd[1481]: time="2026-03-04T01:00:50.121881274Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 4 01:00:50.121948 containerd[1481]: time="2026-03-04T01:00:50.121913755Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 4 01:00:50.121948 containerd[1481]: time="2026-03-04T01:00:50.121934845Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 4 01:00:50.122237 containerd[1481]: time="2026-03-04T01:00:50.122088321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122237 containerd[1481]: time="2026-03-04T01:00:50.122206620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122237 containerd[1481]: time="2026-03-04T01:00:50.122229321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122539 containerd[1481]: time="2026-03-04T01:00:50.122251305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122539 containerd[1481]: time="2026-03-04T01:00:50.122275072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122539 containerd[1481]: time="2026-03-04T01:00:50.122459166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122539 containerd[1481]: time="2026-03-04T01:00:50.122477540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122539 containerd[1481]: time="2026-03-04T01:00:50.122490198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122539 containerd[1481]: time="2026-03-04T01:00:50.122501890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122539 containerd[1481]: time="2026-03-04T01:00:50.122522406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122539 containerd[1481]: time="2026-03-04T01:00:50.122534289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122539 containerd[1481]: time="2026-03-04T01:00:50.122546012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122711 containerd[1481]: time="2026-03-04T01:00:50.122592891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122711 containerd[1481]: time="2026-03-04T01:00:50.122608376Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 4 01:00:50.122752 containerd[1481]: time="2026-03-04T01:00:50.122716000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122752 containerd[1481]: time="2026-03-04T01:00:50.122729311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.122752 containerd[1481]: time="2026-03-04T01:00:50.122741276Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 4 01:00:50.123563 containerd[1481]: time="2026-03-04T01:00:50.122919876Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 4 01:00:50.123601 containerd[1481]: time="2026-03-04T01:00:50.123557164Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 4 01:00:50.123601 containerd[1481]: time="2026-03-04T01:00:50.123573646Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 4 01:00:50.123601 containerd[1481]: time="2026-03-04T01:00:50.123585901Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 4 01:00:50.123601 containerd[1481]: time="2026-03-04T01:00:50.123595399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.123680 containerd[1481]: time="2026-03-04T01:00:50.123635950Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 4 01:00:50.123680 containerd[1481]: time="2026-03-04T01:00:50.123673601Z" level=info msg="NRI interface is disabled by configuration."
Mar 4 01:00:50.123713 containerd[1481]: time="2026-03-04T01:00:50.123685335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 4 01:00:50.124924 containerd[1481]: time="2026-03-04T01:00:50.124733283Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 4 01:00:50.124924 containerd[1481]: time="2026-03-04T01:00:50.124845726Z" level=info msg="Connect containerd service"
Mar 4 01:00:50.124924 containerd[1481]: time="2026-03-04T01:00:50.124890160Z" level=info msg="using legacy CRI server"
Mar 4 01:00:50.124924 containerd[1481]: time="2026-03-04T01:00:50.124903411Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 4 01:00:50.127332 containerd[1481]: time="2026-03-04T01:00:50.125850025Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 4 01:00:50.129617 containerd[1481]: time="2026-03-04T01:00:50.129504832Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 4 01:00:50.130574 containerd[1481]: time="2026-03-04T01:00:50.130133667Z" level=info msg="Start subscribing containerd event"
Mar 4 01:00:50.130849 containerd[1481]: time="2026-03-04T01:00:50.130709014Z" level=info msg="Start recovering state"
Mar 4 01:00:50.131078 containerd[1481]: time="2026-03-04T01:00:50.130983154Z" level=info msg="Start event monitor"
Mar 4 01:00:50.131265 containerd[1481]: time="2026-03-04T01:00:50.131122401Z" level=info msg="Start snapshots syncer"
Mar 4 01:00:50.131265 containerd[1481]: time="2026-03-04T01:00:50.131163122Z" level=info msg="Start cni network conf syncer for default"
Mar 4 01:00:50.131265 containerd[1481]: time="2026-03-04T01:00:50.131236324Z" level=info msg="Start streaming server"
Mar 4 01:00:50.144077 containerd[1481]: time="2026-03-04T01:00:50.139320148Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 4 01:00:50.144077 containerd[1481]: time="2026-03-04T01:00:50.139528806Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 4 01:00:50.139948 systemd[1]: Started containerd.service - containerd container runtime.
Mar 4 01:00:50.147606 containerd[1481]: time="2026-03-04T01:00:50.146864628Z" level=info msg="containerd successfully booted in 0.364628s"
Mar 4 01:00:50.458531 tar[1468]: linux-amd64/README.md
Mar 4 01:00:50.504951 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 4 01:00:52.902971 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 4 01:00:52.919013 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:43882.service - OpenSSH per-connection server daemon (10.0.0.1:43882). Mar 4 01:00:53.061672 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 43882 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:53.068738 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:53.121905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:00:53.154481 systemd-logind[1450]: New session 1 of user core. Mar 4 01:00:53.171865 (kubelet)[1551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:00:53.174056 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 4 01:00:53.179071 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 4 01:00:53.202910 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 4 01:00:53.233947 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 4 01:00:53.272163 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 4 01:00:53.302026 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 4 01:00:53.962164 systemd[1554]: Queued start job for default target default.target. Mar 4 01:00:54.036960 systemd[1554]: Created slice app.slice - User Application Slice. Mar 4 01:00:54.037073 systemd[1554]: Reached target paths.target - Paths. Mar 4 01:00:54.037088 systemd[1554]: Reached target timers.target - Timers. Mar 4 01:00:54.062956 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 4 01:00:54.102443 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 4 01:00:54.102620 systemd[1554]: Reached target sockets.target - Sockets. 
Mar 4 01:00:54.102646 systemd[1554]: Reached target basic.target - Basic System. Mar 4 01:00:54.102767 systemd[1554]: Reached target default.target - Main User Target. Mar 4 01:00:54.102839 systemd[1554]: Startup finished in 778ms. Mar 4 01:00:54.103505 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 4 01:00:54.120715 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 4 01:00:54.126043 systemd[1]: Startup finished in 6.321s (kernel) + 15.825s (initrd) + 19.077s (userspace) = 41.223s. Mar 4 01:00:54.321133 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:43894.service - OpenSSH per-connection server daemon (10.0.0.1:43894). Mar 4 01:00:54.816949 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 43894 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:54.906203 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:54.963539 systemd-logind[1450]: New session 2 of user core. Mar 4 01:00:54.974778 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 4 01:00:55.075903 sshd[1571]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:55.081790 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:43894.service: Deactivated successfully. Mar 4 01:00:55.085619 systemd[1]: session-2.scope: Deactivated successfully. Mar 4 01:00:55.088579 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Mar 4 01:00:55.125126 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:43898.service - OpenSSH per-connection server daemon (10.0.0.1:43898). Mar 4 01:00:55.155145 systemd-logind[1450]: Removed session 2. Mar 4 01:00:55.315495 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 43898 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:55.318242 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:55.325465 systemd-logind[1450]: New session 3 of user core. 
Mar 4 01:00:55.340865 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 4 01:00:55.472415 sshd[1582]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:55.492847 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:43898.service: Deactivated successfully. Mar 4 01:00:55.495733 systemd[1]: session-3.scope: Deactivated successfully. Mar 4 01:00:55.498733 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Mar 4 01:00:55.510865 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:43902.service - OpenSSH per-connection server daemon (10.0.0.1:43902). Mar 4 01:00:55.513145 systemd-logind[1450]: Removed session 3. Mar 4 01:00:55.573151 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 43902 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:55.577684 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:55.585257 systemd-logind[1450]: New session 4 of user core. Mar 4 01:00:55.594658 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 4 01:00:55.732110 sshd[1591]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:55.747995 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:43902.service: Deactivated successfully. Mar 4 01:00:55.752816 systemd[1]: session-4.scope: Deactivated successfully. Mar 4 01:00:55.756332 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Mar 4 01:00:55.768771 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:43910.service - OpenSSH per-connection server daemon (10.0.0.1:43910). Mar 4 01:00:55.770343 systemd-logind[1450]: Removed session 4. 
Mar 4 01:00:55.797901 kubelet[1551]: E0304 01:00:55.797477 1551 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:00:55.802852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:00:55.803111 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:00:55.803726 systemd[1]: kubelet.service: Consumed 8.193s CPU time. Mar 4 01:00:55.820485 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 43910 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:55.822805 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:55.993109 systemd-logind[1450]: New session 5 of user core. Mar 4 01:00:56.003794 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 4 01:00:56.082541 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 4 01:00:56.083009 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:00:56.106881 sudo[1602]: pam_unix(sudo:session): session closed for user root Mar 4 01:00:56.110539 sshd[1598]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:56.127699 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:43910.service: Deactivated successfully. Mar 4 01:00:56.131061 systemd[1]: session-5.scope: Deactivated successfully. Mar 4 01:00:56.133873 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Mar 4 01:00:56.149751 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:43918.service - OpenSSH per-connection server daemon (10.0.0.1:43918). Mar 4 01:00:56.151219 systemd-logind[1450]: Removed session 5. 
Mar 4 01:00:56.240252 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 43918 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:56.243488 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:56.263623 systemd-logind[1450]: New session 6 of user core. Mar 4 01:00:56.273643 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 4 01:00:56.474314 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 4 01:00:56.475870 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:00:56.530312 sudo[1611]: pam_unix(sudo:session): session closed for user root Mar 4 01:00:56.558912 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 4 01:00:56.559650 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:00:56.615013 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 4 01:00:56.621124 auditctl[1614]: No rules Mar 4 01:00:56.621929 systemd[1]: audit-rules.service: Deactivated successfully. Mar 4 01:00:56.622443 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 4 01:00:56.626871 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 4 01:00:56.687233 augenrules[1632]: No rules Mar 4 01:00:56.688804 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 01:00:56.690506 sudo[1610]: pam_unix(sudo:session): session closed for user root Mar 4 01:00:56.693903 sshd[1607]: pam_unix(sshd:session): session closed for user core Mar 4 01:00:56.709631 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:43918.service: Deactivated successfully. Mar 4 01:00:56.712064 systemd[1]: session-6.scope: Deactivated successfully. Mar 4 01:00:56.715512 systemd-logind[1450]: Session 6 logged out. 
Waiting for processes to exit. Mar 4 01:00:56.727034 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:43930.service - OpenSSH per-connection server daemon (10.0.0.1:43930). Mar 4 01:00:56.728740 systemd-logind[1450]: Removed session 6. Mar 4 01:00:56.795085 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 43930 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:00:56.797797 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:00:56.807448 systemd-logind[1450]: New session 7 of user core. Mar 4 01:00:56.828805 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 4 01:00:56.908306 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 4 01:00:56.909096 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:01:00.540123 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 4 01:01:00.540518 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 4 01:01:05.332049 dockerd[1661]: time="2026-03-04T01:01:05.327760244Z" level=info msg="Starting up" Mar 4 01:01:05.970702 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 4 01:01:05.987738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:01:06.552996 dockerd[1661]: time="2026-03-04T01:01:06.552434212Z" level=info msg="Loading containers: start." Mar 4 01:01:07.289701 kernel: Initializing XFRM netlink socket Mar 4 01:01:07.713090 systemd-networkd[1397]: docker0: Link UP Mar 4 01:01:07.753436 dockerd[1661]: time="2026-03-04T01:01:07.753133962Z" level=info msg="Loading containers: done." Mar 4 01:01:07.916856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 01:01:07.920888 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:01:07.937487 dockerd[1661]: time="2026-03-04T01:01:07.937265094Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 4 01:01:07.937936 dockerd[1661]: time="2026-03-04T01:01:07.937805981Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 4 01:01:07.938305 dockerd[1661]: time="2026-03-04T01:01:07.938130644Z" level=info msg="Daemon has completed initialization" Mar 4 01:01:08.041660 dockerd[1661]: time="2026-03-04T01:01:08.040490252Z" level=info msg="API listen on /run/docker.sock" Mar 4 01:01:08.040942 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 4 01:01:08.368877 kubelet[1783]: E0304 01:01:08.366955 1783 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:01:08.379121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:01:08.379624 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:01:08.382872 systemd[1]: kubelet.service: Consumed 2.423s CPU time. Mar 4 01:01:10.731281 containerd[1481]: time="2026-03-04T01:01:10.730512600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 4 01:01:12.230788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608091489.mount: Deactivated successfully. 
Mar 4 01:01:14.973189 containerd[1481]: time="2026-03-04T01:01:14.972832872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:14.974855 containerd[1481]: time="2026-03-04T01:01:14.973465369Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 4 01:01:14.977456 containerd[1481]: time="2026-03-04T01:01:14.976671653Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:14.982154 containerd[1481]: time="2026-03-04T01:01:14.982006634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:14.984111 containerd[1481]: time="2026-03-04T01:01:14.983944568Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 4.253273438s" Mar 4 01:01:14.984111 containerd[1481]: time="2026-03-04T01:01:14.984037096Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 4 01:01:14.991713 containerd[1481]: time="2026-03-04T01:01:14.991612152Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 4 01:01:17.815099 containerd[1481]: time="2026-03-04T01:01:17.814702182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:17.817010 containerd[1481]: time="2026-03-04T01:01:17.815888186Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 4 01:01:17.817992 containerd[1481]: time="2026-03-04T01:01:17.817896976Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:17.823725 containerd[1481]: time="2026-03-04T01:01:17.823670183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:17.829221 containerd[1481]: time="2026-03-04T01:01:17.829163597Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 2.837474531s" Mar 4 01:01:17.829309 containerd[1481]: time="2026-03-04T01:01:17.829225309Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 4 01:01:17.839998 containerd[1481]: time="2026-03-04T01:01:17.839838611Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 4 01:01:18.706311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 4 01:01:18.766309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:01:21.453696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 01:01:21.466163 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:01:22.440209 kubelet[1902]: E0304 01:01:22.438696 1902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:01:22.451051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:01:22.453142 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:01:22.467100 systemd[1]: kubelet.service: Consumed 3.636s CPU time. Mar 4 01:01:23.146279 containerd[1481]: time="2026-03-04T01:01:23.141959984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:23.220478 containerd[1481]: time="2026-03-04T01:01:23.154699563Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 4 01:01:23.446605 containerd[1481]: time="2026-03-04T01:01:23.425230926Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:24.627220 containerd[1481]: time="2026-03-04T01:01:24.626525676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:24.630967 containerd[1481]: time="2026-03-04T01:01:24.628504672Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id 
\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 6.788571417s" Mar 4 01:01:24.630967 containerd[1481]: time="2026-03-04T01:01:24.628682570Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 4 01:01:24.720816 containerd[1481]: time="2026-03-04T01:01:24.686833820Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 4 01:01:29.194612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827054871.mount: Deactivated successfully. Mar 4 01:01:30.038899 containerd[1481]: time="2026-03-04T01:01:30.038169903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:30.040934 containerd[1481]: time="2026-03-04T01:01:30.039324760Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 4 01:01:30.040934 containerd[1481]: time="2026-03-04T01:01:30.040706099Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:30.044035 containerd[1481]: time="2026-03-04T01:01:30.043957171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:30.044591 containerd[1481]: time="2026-03-04T01:01:30.044512286Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag 
\"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 5.357520677s" Mar 4 01:01:30.044591 containerd[1481]: time="2026-03-04T01:01:30.044580375Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 4 01:01:30.048930 containerd[1481]: time="2026-03-04T01:01:30.048866738Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 4 01:01:30.553454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1674593699.mount: Deactivated successfully. Mar 4 01:01:30.935144 update_engine[1453]: I20260304 01:01:30.931766 1453 update_attempter.cc:509] Updating boot flags... Mar 4 01:01:31.194014 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1937) Mar 4 01:01:31.349544 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1937) Mar 4 01:01:31.591816 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1937) Mar 4 01:01:32.521876 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 4 01:01:32.533936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:01:34.252144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 01:01:34.279246 (kubelet)[1993]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:01:34.879778 kubelet[1993]: E0304 01:01:34.879344 1993 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:01:34.900506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:01:34.900928 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:01:34.901794 systemd[1]: kubelet.service: Consumed 2.427s CPU time. Mar 4 01:01:36.570457 containerd[1481]: time="2026-03-04T01:01:36.569747123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:36.572874 containerd[1481]: time="2026-03-04T01:01:36.571041463Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 4 01:01:36.572931 containerd[1481]: time="2026-03-04T01:01:36.572858140Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:36.579857 containerd[1481]: time="2026-03-04T01:01:36.579762447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:36.585561 containerd[1481]: time="2026-03-04T01:01:36.585087724Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 6.536141512s" Mar 4 01:01:36.585561 containerd[1481]: time="2026-03-04T01:01:36.585231524Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 4 01:01:36.607277 containerd[1481]: time="2026-03-04T01:01:36.607073466Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 4 01:01:37.192939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2059366177.mount: Deactivated successfully. Mar 4 01:01:37.205815 containerd[1481]: time="2026-03-04T01:01:37.205537721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:37.207272 containerd[1481]: time="2026-03-04T01:01:37.207140876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 4 01:01:37.211035 containerd[1481]: time="2026-03-04T01:01:37.210867520Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:37.217263 containerd[1481]: time="2026-03-04T01:01:37.217012452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:37.218160 containerd[1481]: time="2026-03-04T01:01:37.218019562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo 
digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 610.84738ms" Mar 4 01:01:37.218160 containerd[1481]: time="2026-03-04T01:01:37.218134781Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 4 01:01:37.221907 containerd[1481]: time="2026-03-04T01:01:37.221567973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 4 01:01:38.196140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748794593.mount: Deactivated successfully. Mar 4 01:01:43.997139 containerd[1481]: time="2026-03-04T01:01:43.984075394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:43.997139 containerd[1481]: time="2026-03-04T01:01:43.984717930Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 4 01:01:44.234817 containerd[1481]: time="2026-03-04T01:01:44.234202050Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:44.248494 containerd[1481]: time="2026-03-04T01:01:44.248014949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:01:44.250996 containerd[1481]: time="2026-03-04T01:01:44.250858304Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 7.029244114s" Mar 4 
01:01:44.250996 containerd[1481]: time="2026-03-04T01:01:44.250979386Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 4 01:01:45.022050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 4 01:01:45.110809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:01:45.411714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:01:45.412017 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:01:45.769823 kubelet[2100]: E0304 01:01:45.769622 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:01:45.778600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:01:45.778902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:01:50.765863 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:01:50.777089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:01:50.836996 systemd[1]: Reloading requested from client PID 2116 ('systemctl') (unit session-7.scope)... Mar 4 01:01:50.837057 systemd[1]: Reloading... Mar 4 01:01:50.969573 zram_generator::config[2160]: No configuration found. Mar 4 01:01:51.374213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:01:51.478009 systemd[1]: Reloading finished in 640 ms. 
Mar 4 01:01:51.548540 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 4 01:01:51.548717 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 4 01:01:51.549160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:01:51.553840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:01:52.075338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:01:52.096170 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 01:01:53.642873 kubelet[2204]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 4 01:01:53.642873 kubelet[2204]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 01:01:53.642873 kubelet[2204]: I0304 01:01:53.642715 2204 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 4 01:01:54.013222 kubelet[2204]: I0304 01:01:54.013004 2204 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 4 01:01:54.013222 kubelet[2204]: I0304 01:01:54.013108 2204 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 01:01:54.013222 kubelet[2204]: I0304 01:01:54.013315 2204 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 4 01:01:54.014080 kubelet[2204]: I0304 01:01:54.013329 2204 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 01:01:54.014080 kubelet[2204]: I0304 01:01:54.013964 2204 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 4 01:01:54.127075 kubelet[2204]: E0304 01:01:54.126959 2204 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 4 01:01:54.128668 kubelet[2204]: I0304 01:01:54.128583 2204 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 01:01:54.141256 kubelet[2204]: E0304 01:01:54.141201 2204 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 01:01:54.141484 kubelet[2204]: I0304 01:01:54.141320 2204 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 4 01:01:54.158022 kubelet[2204]: I0304 01:01:54.157937 2204 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 4 01:01:54.160411 kubelet[2204]: I0304 01:01:54.160263 2204 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 01:01:54.161313 kubelet[2204]: I0304 01:01:54.160492 2204 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 4 01:01:54.161677 kubelet[2204]: I0304 01:01:54.161533 2204 topology_manager.go:138] "Creating topology manager with none policy"
Mar 4 01:01:54.161677 kubelet[2204]: I0304 01:01:54.161561 2204 container_manager_linux.go:306] "Creating device plugin manager"
Mar 4 01:01:54.162098 kubelet[2204]: I0304 01:01:54.161972 2204 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 4 01:01:54.165915 kubelet[2204]: I0304 01:01:54.165730 2204 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 01:01:54.167495 kubelet[2204]: I0304 01:01:54.167241 2204 kubelet.go:475] "Attempting to sync node with API server"
Mar 4 01:01:54.167879 kubelet[2204]: I0304 01:01:54.167673 2204 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 01:01:54.168163 kubelet[2204]: I0304 01:01:54.167908 2204 kubelet.go:387] "Adding apiserver pod source"
Mar 4 01:01:54.168163 kubelet[2204]: I0304 01:01:54.168104 2204 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 01:01:54.170526 kubelet[2204]: E0304 01:01:54.170251 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 01:01:54.171293 kubelet[2204]: E0304 01:01:54.171102 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 01:01:54.397709 kubelet[2204]: I0304 01:01:54.395464 2204 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 01:01:54.403442 kubelet[2204]: I0304 01:01:54.401686 2204 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 01:01:54.403442 kubelet[2204]: I0304 01:01:54.402118 2204 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 4 01:01:54.403442 kubelet[2204]: W0304 01:01:54.402695 2204 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 4 01:01:54.414687 kubelet[2204]: I0304 01:01:54.414644 2204 server.go:1262] "Started kubelet"
Mar 4 01:01:54.415442 kubelet[2204]: I0304 01:01:54.415166 2204 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 01:01:54.420446 kubelet[2204]: I0304 01:01:54.420333 2204 server.go:310] "Adding debug handlers to kubelet server"
Mar 4 01:01:54.422768 kubelet[2204]: I0304 01:01:54.422645 2204 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 01:01:54.423000 kubelet[2204]: I0304 01:01:54.422929 2204 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 4 01:01:54.423999 kubelet[2204]: I0304 01:01:54.423981 2204 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 01:01:54.425079 kubelet[2204]: E0304 01:01:54.425061 2204 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 4 01:01:54.426523 kubelet[2204]: I0304 01:01:54.426498 2204 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 01:01:54.427694 kubelet[2204]: E0304 01:01:54.425135 2204 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18997dadf3775c56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-04 01:01:54.41450095 +0000 UTC m=+2.293498587,LastTimestamp:2026-03-04 01:01:54.41450095 +0000 UTC m=+2.293498587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 4 01:01:54.428679 kubelet[2204]: I0304 01:01:54.428526 2204 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 4 01:01:54.430122 kubelet[2204]: I0304 01:01:54.430102 2204 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 4 01:01:54.430650 kubelet[2204]: I0304 01:01:54.430633 2204 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 4 01:01:54.430771 kubelet[2204]: E0304 01:01:54.430655 2204 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 4 01:01:54.431074 kubelet[2204]: I0304 01:01:54.430941 2204 reconciler.go:29] "Reconciler: start to sync state"
Mar 4 01:01:54.431245 kubelet[2204]: E0304 01:01:54.431168 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms"
Mar 4 01:01:54.432166 kubelet[2204]: E0304 01:01:54.431778 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 01:01:54.434232 kubelet[2204]: I0304 01:01:54.434027 2204 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 01:01:54.437291 kubelet[2204]: I0304 01:01:54.437245 2204 factory.go:223] Registration of the containerd container factory successfully
Mar 4 01:01:54.437291 kubelet[2204]: I0304 01:01:54.437269 2204 factory.go:223] Registration of the systemd container factory successfully
Mar 4 01:01:54.486337 kubelet[2204]: I0304 01:01:54.485776 2204 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 4 01:01:54.486337 kubelet[2204]: I0304 01:01:54.485798 2204 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 4 01:01:54.486337 kubelet[2204]: I0304 01:01:54.485947 2204 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 01:01:54.490900 kubelet[2204]: I0304 01:01:54.490881 2204 policy_none.go:49] "None policy: Start"
Mar 4 01:01:54.491028 kubelet[2204]: I0304 01:01:54.491013 2204 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 4 01:01:54.491138 kubelet[2204]: I0304 01:01:54.491079 2204 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 4 01:01:54.492328 kubelet[2204]: I0304 01:01:54.492305 2204 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 4 01:01:54.494321 kubelet[2204]: I0304 01:01:54.494226 2204 policy_none.go:47] "Start"
Mar 4 01:01:54.496033 kubelet[2204]: I0304 01:01:54.495907 2204 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 4 01:01:54.496338 kubelet[2204]: I0304 01:01:54.496252 2204 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 4 01:01:54.498063 kubelet[2204]: I0304 01:01:54.496533 2204 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 4 01:01:54.498063 kubelet[2204]: E0304 01:01:54.496610 2204 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 4 01:01:54.498685 kubelet[2204]: E0304 01:01:54.498652 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 4 01:01:54.507563 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 4 01:01:54.531073 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 4 01:01:54.531543 kubelet[2204]: E0304 01:01:54.531308 2204 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 4 01:01:54.537434 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 4 01:01:54.558044 kubelet[2204]: E0304 01:01:54.556294 2204 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 4 01:01:54.558044 kubelet[2204]: I0304 01:01:54.556975 2204 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 4 01:01:54.558044 kubelet[2204]: I0304 01:01:54.557036 2204 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 4 01:01:54.558044 kubelet[2204]: I0304 01:01:54.557869 2204 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 4 01:01:54.576455 kubelet[2204]: E0304 01:01:54.574235 2204 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 4 01:01:54.576455 kubelet[2204]: E0304 01:01:54.574319 2204 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 4 01:01:54.621990 systemd[1]: Created slice kubepods-burstable-podde67561edd222a7cc93975407ca4873e.slice - libcontainer container kubepods-burstable-podde67561edd222a7cc93975407ca4873e.slice.
Mar 4 01:01:54.632610 kubelet[2204]: I0304 01:01:54.632457 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de67561edd222a7cc93975407ca4873e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"de67561edd222a7cc93975407ca4873e\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:54.632610 kubelet[2204]: I0304 01:01:54.632547 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:54.632610 kubelet[2204]: I0304 01:01:54.632578 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:54.632610 kubelet[2204]: I0304 01:01:54.632610 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:54.632862 kubelet[2204]: I0304 01:01:54.632638 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de67561edd222a7cc93975407ca4873e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"de67561edd222a7cc93975407ca4873e\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:54.632862 kubelet[2204]: I0304 01:01:54.632716 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de67561edd222a7cc93975407ca4873e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"de67561edd222a7cc93975407ca4873e\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:01:54.632956 kubelet[2204]: I0304 01:01:54.632897 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:54.632956 kubelet[2204]: E0304 01:01:54.632479 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms"
Mar 4 01:01:54.632956 kubelet[2204]: I0304 01:01:54.632930 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:01:54.633024 kubelet[2204]: I0304 01:01:54.632954 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 4 01:01:54.636067 kubelet[2204]: E0304 01:01:54.635985 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:54.641526 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 4 01:01:54.645783 kubelet[2204]: E0304 01:01:54.644696 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:54.646699 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 4 01:01:54.649595 kubelet[2204]: E0304 01:01:54.649446 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:54.660723 kubelet[2204]: I0304 01:01:54.660198 2204 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:54.660723 kubelet[2204]: E0304 01:01:54.660687 2204 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Mar 4 01:01:54.864005 kubelet[2204]: I0304 01:01:54.863886 2204 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:54.864297 kubelet[2204]: E0304 01:01:54.864195 2204 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Mar 4 01:01:54.941963 kubelet[2204]: E0304 01:01:54.941695 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:54.944535 containerd[1481]: time="2026-03-04T01:01:54.944314152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:de67561edd222a7cc93975407ca4873e,Namespace:kube-system,Attempt:0,}"
Mar 4 01:01:54.948929 kubelet[2204]: E0304 01:01:54.948747 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:54.949604 containerd[1481]: time="2026-03-04T01:01:54.949548028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 4 01:01:54.953616 kubelet[2204]: E0304 01:01:54.953534 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:54.954440 containerd[1481]: time="2026-03-04T01:01:54.954212595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 4 01:01:55.043686 kubelet[2204]: E0304 01:01:55.043078 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms"
Mar 4 01:01:55.243006 kubelet[2204]: E0304 01:01:55.242830 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 01:01:55.268042 kubelet[2204]: I0304 01:01:55.267911 2204 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:55.268639 kubelet[2204]: E0304 01:01:55.268507 2204 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Mar 4 01:01:55.287240 kubelet[2204]: E0304 01:01:55.287083 2204 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18997dadf3775c56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-04 01:01:54.41450095 +0000 UTC m=+2.293498587,LastTimestamp:2026-03-04 01:01:54.41450095 +0000 UTC m=+2.293498587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 4 01:01:55.292487 kubelet[2204]: E0304 01:01:55.292320 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 01:01:55.347197 kubelet[2204]: E0304 01:01:55.347092 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 4 01:01:55.503115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1801917526.mount: Deactivated successfully.
Mar 4 01:01:55.513829 containerd[1481]: time="2026-03-04T01:01:55.513554847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 01:01:55.518584 containerd[1481]: time="2026-03-04T01:01:55.518483731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 4 01:01:55.519983 containerd[1481]: time="2026-03-04T01:01:55.519830329Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 01:01:55.521553 containerd[1481]: time="2026-03-04T01:01:55.521463297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 01:01:55.523642 containerd[1481]: time="2026-03-04T01:01:55.523547734Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 01:01:55.524650 containerd[1481]: time="2026-03-04T01:01:55.524532424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 4 01:01:55.525639 containerd[1481]: time="2026-03-04T01:01:55.525605603Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 4 01:01:55.530251 containerd[1481]: time="2026-03-04T01:01:55.530168580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 01:01:55.534094 containerd[1481]: time="2026-03-04T01:01:55.533940628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.322375ms"
Mar 4 01:01:55.541115 containerd[1481]: time="2026-03-04T01:01:55.540650205Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 586.329113ms"
Mar 4 01:01:55.542325 containerd[1481]: time="2026-03-04T01:01:55.542177184Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 597.54287ms"
Mar 4 01:01:55.549784 kubelet[2204]: E0304 01:01:55.549688 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 01:01:55.851672 kubelet[2204]: E0304 01:01:55.848699 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s"
Mar 4 01:01:56.477073 kubelet[2204]: E0304 01:01:56.476995 2204 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 4 01:01:56.506486 kubelet[2204]: I0304 01:01:56.506174 2204 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:56.506824 kubelet[2204]: E0304 01:01:56.506691 2204 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Mar 4 01:01:56.536443 containerd[1481]: time="2026-03-04T01:01:56.535684120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:01:56.536443 containerd[1481]: time="2026-03-04T01:01:56.536117896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:01:56.536443 containerd[1481]: time="2026-03-04T01:01:56.536133807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:56.537067 containerd[1481]: time="2026-03-04T01:01:56.536465380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:56.541895 containerd[1481]: time="2026-03-04T01:01:56.541092254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:01:56.541895 containerd[1481]: time="2026-03-04T01:01:56.541146658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:01:56.541895 containerd[1481]: time="2026-03-04T01:01:56.541169511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:56.541895 containerd[1481]: time="2026-03-04T01:01:56.541422353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:56.566769 containerd[1481]: time="2026-03-04T01:01:56.558303669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:01:56.566769 containerd[1481]: time="2026-03-04T01:01:56.558451440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:01:56.566769 containerd[1481]: time="2026-03-04T01:01:56.559231288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:56.566769 containerd[1481]: time="2026-03-04T01:01:56.559580764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:01:56.668677 systemd[1]: Started cri-containerd-085e1abce39bbf269153b660d7e46aca5a09396196c2a30f66806ed20f6f9a4f.scope - libcontainer container 085e1abce39bbf269153b660d7e46aca5a09396196c2a30f66806ed20f6f9a4f.
Mar 4 01:01:56.676131 systemd[1]: Started cri-containerd-97e14297661aa50963a73862588c24d06ea2ae23821c7736a7446ce7456c3bfe.scope - libcontainer container 97e14297661aa50963a73862588c24d06ea2ae23821c7736a7446ce7456c3bfe.
Mar 4 01:01:56.997903 systemd[1]: Started cri-containerd-f4fc33cbc10345ff9e78e5dad695f3e357e3bb5fdf9179d8434057ee4b0a82fd.scope - libcontainer container f4fc33cbc10345ff9e78e5dad695f3e357e3bb5fdf9179d8434057ee4b0a82fd.
Mar 4 01:01:57.097857 containerd[1481]: time="2026-03-04T01:01:57.097775293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:de67561edd222a7cc93975407ca4873e,Namespace:kube-system,Attempt:0,} returns sandbox id \"085e1abce39bbf269153b660d7e46aca5a09396196c2a30f66806ed20f6f9a4f\""
Mar 4 01:01:57.099615 kubelet[2204]: E0304 01:01:57.099570 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:57.109273 containerd[1481]: time="2026-03-04T01:01:57.109182365Z" level=info msg="CreateContainer within sandbox \"085e1abce39bbf269153b660d7e46aca5a09396196c2a30f66806ed20f6f9a4f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 4 01:01:57.115927 containerd[1481]: time="2026-03-04T01:01:57.115783898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4fc33cbc10345ff9e78e5dad695f3e357e3bb5fdf9179d8434057ee4b0a82fd\""
Mar 4 01:01:57.116706 kubelet[2204]: E0304 01:01:57.116678 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:57.124250 containerd[1481]: time="2026-03-04T01:01:57.124109627Z" level=info msg="CreateContainer within sandbox \"f4fc33cbc10345ff9e78e5dad695f3e357e3bb5fdf9179d8434057ee4b0a82fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 4 01:01:57.127768 containerd[1481]: time="2026-03-04T01:01:57.127279609Z" level=info msg="CreateContainer within sandbox \"085e1abce39bbf269153b660d7e46aca5a09396196c2a30f66806ed20f6f9a4f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7bd6001fda2971565c4e6b182d05d7201730f68b19697758085b3b00a3e1e7f0\""
Mar 4 01:01:57.128657 containerd[1481]: time="2026-03-04T01:01:57.128578305Z" level=info msg="StartContainer for \"7bd6001fda2971565c4e6b182d05d7201730f68b19697758085b3b00a3e1e7f0\""
Mar 4 01:01:57.145603 containerd[1481]: time="2026-03-04T01:01:57.145547802Z" level=info msg="CreateContainer within sandbox \"f4fc33cbc10345ff9e78e5dad695f3e357e3bb5fdf9179d8434057ee4b0a82fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"db49500a90376ad05230aebec6eff24f4b6f3586606063e74d4e43682dae9145\""
Mar 4 01:01:57.146999 containerd[1481]: time="2026-03-04T01:01:57.146510926Z" level=info msg="StartContainer for \"db49500a90376ad05230aebec6eff24f4b6f3586606063e74d4e43682dae9145\""
Mar 4 01:01:57.154860 containerd[1481]: time="2026-03-04T01:01:57.154772533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"97e14297661aa50963a73862588c24d06ea2ae23821c7736a7446ce7456c3bfe\""
Mar 4 01:01:57.157212 kubelet[2204]: E0304 01:01:57.157164 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:57.164774 containerd[1481]: time="2026-03-04T01:01:57.164742276Z" level=info msg="CreateContainer within sandbox \"97e14297661aa50963a73862588c24d06ea2ae23821c7736a7446ce7456c3bfe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 4 01:01:57.187311 containerd[1481]: time="2026-03-04T01:01:57.187059517Z" level=info msg="CreateContainer within sandbox \"97e14297661aa50963a73862588c24d06ea2ae23821c7736a7446ce7456c3bfe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5384bae4caa6a8bb3eede8e68d2ad0a0b3b2239c22158eb53edae52eaf8ab771\""
Mar 4 01:01:57.193813 containerd[1481]: time="2026-03-04T01:01:57.193782017Z" level=info msg="StartContainer for \"5384bae4caa6a8bb3eede8e68d2ad0a0b3b2239c22158eb53edae52eaf8ab771\""
Mar 4 01:01:57.204663 systemd[1]: Started cri-containerd-7bd6001fda2971565c4e6b182d05d7201730f68b19697758085b3b00a3e1e7f0.scope - libcontainer container 7bd6001fda2971565c4e6b182d05d7201730f68b19697758085b3b00a3e1e7f0.
Mar 4 01:01:57.274826 systemd[1]: Started cri-containerd-db49500a90376ad05230aebec6eff24f4b6f3586606063e74d4e43682dae9145.scope - libcontainer container db49500a90376ad05230aebec6eff24f4b6f3586606063e74d4e43682dae9145.
Mar 4 01:01:57.675578 kubelet[2204]: E0304 01:01:57.670094 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="3.2s"
Mar 4 01:01:57.675578 kubelet[2204]: E0304 01:01:57.670276 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 01:01:57.675578 kubelet[2204]: E0304 01:01:57.670562 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 01:01:57.674056 systemd[1]: run-containerd-runc-k8s.io-97e14297661aa50963a73862588c24d06ea2ae23821c7736a7446ce7456c3bfe-runc.Qzqw2T.mount: Deactivated successfully.
Mar 4 01:01:57.698874 systemd[1]: Started cri-containerd-5384bae4caa6a8bb3eede8e68d2ad0a0b3b2239c22158eb53edae52eaf8ab771.scope - libcontainer container 5384bae4caa6a8bb3eede8e68d2ad0a0b3b2239c22158eb53edae52eaf8ab771.
Mar 4 01:01:58.019730 kubelet[2204]: E0304 01:01:58.018065 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 4 01:01:58.056850 kubelet[2204]: E0304 01:01:58.056746 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 01:01:58.323791 kubelet[2204]: I0304 01:01:58.320258 2204 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:01:58.323791 kubelet[2204]: E0304 01:01:58.323151 2204 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Mar 4 01:01:58.369129 containerd[1481]: time="2026-03-04T01:01:58.367972406Z" level=info msg="StartContainer for \"db49500a90376ad05230aebec6eff24f4b6f3586606063e74d4e43682dae9145\" returns successfully"
Mar 4 01:01:58.380664 containerd[1481]: time="2026-03-04T01:01:58.380103331Z" level=info msg="StartContainer for \"7bd6001fda2971565c4e6b182d05d7201730f68b19697758085b3b00a3e1e7f0\" returns successfully"
Mar 4 01:01:58.424034 containerd[1481]: time="2026-03-04T01:01:58.423867683Z" level=info msg="StartContainer for \"5384bae4caa6a8bb3eede8e68d2ad0a0b3b2239c22158eb53edae52eaf8ab771\" returns successfully"
Mar 4 01:01:58.724401 kubelet[2204]: E0304 01:01:58.723963 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:58.726005 kubelet[2204]: E0304 01:01:58.725726 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:58.727908 kubelet[2204]: E0304 01:01:58.727795 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:58.728263 kubelet[2204]: E0304 01:01:58.728070 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:58.734124 kubelet[2204]: E0304 01:01:58.732670 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:58.734124 kubelet[2204]: E0304 01:01:58.732848 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:59.845623 kubelet[2204]: E0304 01:01:59.845325 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:59.846902 kubelet[2204]: E0304 01:01:59.845892 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:59.849400 kubelet[2204]: E0304 01:01:59.847512 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:59.849400 kubelet[2204]: E0304 01:01:59.847606 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:01:59.849400 kubelet[2204]: E0304 01:01:59.848197 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:01:59.849400 kubelet[2204]: E0304 01:01:59.848312 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:02:00.996541 kubelet[2204]: E0304 01:02:00.996412 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:02:00.997817 kubelet[2204]: E0304 01:02:00.996859 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:02:01.002320 kubelet[2204]: E0304 01:02:01.001836 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:02:01.002320 kubelet[2204]: E0304 01:02:01.001986 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:02:01.002320 kubelet[2204]: E0304 01:02:01.002177 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:02:01.002320 kubelet[2204]: E0304 01:02:01.002265 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:02:02.179479 kubelet[2204]: I0304 01:02:02.179155 2204 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:02:03.081533 kubelet[2204]: E0304 01:02:03.081143 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 4 01:02:03.081533 kubelet[2204]: E0304 01:02:03.081586 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:02:04.636798 kubelet[2204]: E0304 01:02:04.633236 2204 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 4 01:02:05.094937 kubelet[2204]: E0304 01:02:05.094878 2204 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 4 01:02:05.174069 kubelet[2204]: I0304 01:02:05.173960 2204 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 4 01:02:05.179126 kubelet[2204]: I0304 01:02:05.179088 2204 apiserver.go:52] "Watching apiserver"
Mar 4 01:02:05.268213 kubelet[2204]: I0304 01:02:05.252152 2204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:02:05.396111 kubelet[2204]: I0304 01:02:05.387156 2204 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 4 01:02:05.429873 kubelet[2204]: E0304 01:02:05.429756 2204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:02:05.429873 kubelet[2204]: I0304 01:02:05.429855 2204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:02:05.433263 kubelet[2204]: E0304 01:02:05.433109 2204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:02:05.433515 kubelet[2204]: I0304 01:02:05.433284 2204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 4 01:02:05.436253 kubelet[2204]: E0304 01:02:05.436161 2204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 4 01:02:08.389996 kubelet[2204]: I0304 01:02:08.381687 2204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:02:08.413147 kubelet[2204]: E0304 01:02:08.413096 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:02:08.433587 kubelet[2204]: E0304 01:02:08.432902 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:02:10.647751 kubelet[2204]: I0304 01:02:10.640909 2204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:02:10.685175 kubelet[2204]: E0304 01:02:10.685034 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:02:10.807614 kubelet[2204]: I0304 01:02:10.805170 2204 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.805097375 podStartE2EDuration="2.805097375s" podCreationTimestamp="2026-03-04 01:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:02:10.761684016 +0000 UTC m=+18.640681634" watchObservedRunningTime="2026-03-04 01:02:10.805097375 +0000 UTC m=+18.684094972"
Mar 4 01:02:10.938613 systemd[1]: Reloading requested from client PID 2496 ('systemctl') (unit session-7.scope)...
Mar 4 01:02:10.938644 systemd[1]: Reloading...
Mar 4 01:02:11.562445 zram_generator::config[2544]: No configuration found.
Mar 4 01:02:11.649332 kubelet[2204]: E0304 01:02:11.649260 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:02:12.187272 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:02:12.581045 systemd[1]: Reloading finished in 1639 ms.
Mar 4 01:02:12.964716 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:02:13.003128 systemd[1]: kubelet.service: Deactivated successfully.
Mar 4 01:02:13.007678 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:02:13.007905 systemd[1]: kubelet.service: Consumed 8.303s CPU time, 128.9M memory peak, 0B memory swap peak.
Mar 4 01:02:13.056931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:02:13.915957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:02:13.971020 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 01:02:14.283215 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 4 01:02:14.283215 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 01:02:14.286146 kubelet[2579]: I0304 01:02:14.284721 2579 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 4 01:02:14.314666 kubelet[2579]: I0304 01:02:14.313300 2579 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 4 01:02:14.316312 kubelet[2579]: I0304 01:02:14.315063 2579 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 01:02:14.316983 kubelet[2579]: I0304 01:02:14.316709 2579 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 4 01:02:14.316983 kubelet[2579]: I0304 01:02:14.316774 2579 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 01:02:14.317906 kubelet[2579]: I0304 01:02:14.317181 2579 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 4 01:02:14.324535 kubelet[2579]: I0304 01:02:14.324409 2579 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 4 01:02:14.330270 kubelet[2579]: I0304 01:02:14.329972 2579 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 01:02:14.377897 kubelet[2579]: E0304 01:02:14.377602 2579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 01:02:14.377897 kubelet[2579]: I0304 01:02:14.377771 2579 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 4 01:02:14.398541 kubelet[2579]: I0304 01:02:14.396141 2579 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 4 01:02:14.402189 kubelet[2579]: I0304 01:02:14.400926 2579 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 01:02:14.402189 kubelet[2579]: I0304 01:02:14.401005 2579 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 4 01:02:14.402189 kubelet[2579]: I0304 01:02:14.401245 2579 topology_manager.go:138] "Creating topology manager with none policy"
Mar 4 01:02:14.402189 kubelet[2579]: I0304 01:02:14.401262 2579 container_manager_linux.go:306] "Creating device plugin manager"
Mar 4 01:02:14.403054 kubelet[2579]: I0304 01:02:14.401310 2579 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 4 01:02:14.403054 kubelet[2579]: I0304 01:02:14.403051 2579 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 01:02:14.408826 kubelet[2579]: I0304 01:02:14.406816 2579 kubelet.go:475] "Attempting to sync node with API server"
Mar 4 01:02:14.408826 kubelet[2579]: I0304 01:02:14.407054 2579 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 01:02:14.412571 kubelet[2579]: I0304 01:02:14.410041 2579 kubelet.go:387] "Adding apiserver pod source"
Mar 4 01:02:14.412571 kubelet[2579]: I0304 01:02:14.410070 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 01:02:14.419563 kubelet[2579]: I0304 01:02:14.419279 2579 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 01:02:14.420277 kubelet[2579]: I0304 01:02:14.420119 2579 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 01:02:14.420277 kubelet[2579]: I0304 01:02:14.420251 2579 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 4 01:02:14.442516 kubelet[2579]: I0304 01:02:14.442324 2579 server.go:1262] "Started kubelet"
Mar 4 01:02:14.454903 kubelet[2579]: I0304 01:02:14.446997 2579 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 01:02:14.454903 kubelet[2579]: I0304 01:02:14.447915 2579 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 01:02:14.454903 kubelet[2579]: I0304 01:02:14.454042 2579 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 4 01:02:14.454903 kubelet[2579]: I0304 01:02:14.454615 2579 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 01:02:14.455692 kubelet[2579]: I0304 01:02:14.455322 2579 server.go:310] "Adding debug handlers to kubelet server"
Mar 4 01:02:14.465488 kubelet[2579]: I0304 01:02:14.465322 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 4 01:02:14.466886 kubelet[2579]: I0304 01:02:14.466803 2579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 01:02:14.467911 kubelet[2579]: I0304 01:02:14.467843 2579 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 4 01:02:14.469496 kubelet[2579]: I0304 01:02:14.469294 2579 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 4 01:02:14.477200 kubelet[2579]: I0304 01:02:14.476918 2579 reconciler.go:29] "Reconciler: start to sync state"
Mar 4 01:02:14.519175 kubelet[2579]: I0304 01:02:14.514898 2579 factory.go:223] Registration of the systemd container factory successfully
Mar 4 01:02:14.519175 kubelet[2579]: I0304 01:02:14.515261 2579 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 01:02:14.523159 kubelet[2579]: E0304 01:02:14.522605 2579 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 4 01:02:14.561280 kubelet[2579]: I0304 01:02:14.556109 2579 factory.go:223] Registration of the containerd container factory successfully
Mar 4 01:02:14.612541 kubelet[2579]: I0304 01:02:14.612236 2579 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 4 01:02:14.618843 kubelet[2579]: I0304 01:02:14.618182 2579 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 4 01:02:14.619337 kubelet[2579]: I0304 01:02:14.619117 2579 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 4 01:02:14.620321 kubelet[2579]: I0304 01:02:14.620072 2579 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 4 01:02:14.620321 kubelet[2579]: E0304 01:02:14.620147 2579 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 4 01:02:14.720423 kubelet[2579]: I0304 01:02:14.719553 2579 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 4 01:02:14.720423 kubelet[2579]: I0304 01:02:14.719590 2579 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 4 01:02:14.720423 kubelet[2579]: I0304 01:02:14.719627 2579 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 01:02:14.720423 kubelet[2579]: I0304 01:02:14.720252 2579 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 4 01:02:14.720423 kubelet[2579]: I0304 01:02:14.720280 2579 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 4 01:02:14.720423 kubelet[2579]: I0304 01:02:14.720313 2579 policy_none.go:49] "None policy: Start"
Mar 4 01:02:14.720423 kubelet[2579]: I0304 01:02:14.720328 2579 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 4 01:02:14.720935 kubelet[2579]: E0304 01:02:14.720479 2579 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 4 01:02:14.720935 kubelet[2579]: I0304 01:02:14.720514 2579 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 4 01:02:14.721533 kubelet[2579]: I0304 01:02:14.721224 2579 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 4 01:02:14.721533 kubelet[2579]: I0304 01:02:14.721329 2579 policy_none.go:47] "Start"
Mar 4 01:02:14.751023 kubelet[2579]: E0304 01:02:14.750986 2579 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 4 01:02:14.753881 kubelet[2579]: I0304 01:02:14.752065 2579 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 4 01:02:14.756007 kubelet[2579]: I0304 01:02:14.753582 2579 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 4 01:02:14.757778 kubelet[2579]: I0304 01:02:14.756533 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 4 01:02:14.758062 kubelet[2579]: E0304 01:02:14.756971 2579 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 4 01:02:14.918517 kubelet[2579]: I0304 01:02:14.917987 2579 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 4 01:02:14.926334 kubelet[2579]: I0304 01:02:14.926229 2579 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:02:14.931137 kubelet[2579]: I0304 01:02:14.929990 2579 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 4 01:02:14.937423 kubelet[2579]: I0304 01:02:14.937314 2579 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:02:14.972749 kubelet[2579]: E0304 01:02:14.969838 2579 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:02:14.976907 kubelet[2579]: E0304 01:02:14.976230 2579 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:02:14.977121 kubelet[2579]: I0304 01:02:14.977052 2579 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 4 01:02:14.977247 kubelet[2579]: I0304 01:02:14.977179 2579 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 4 01:02:14.985279 kubelet[2579]: I0304 01:02:14.985158 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de67561edd222a7cc93975407ca4873e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"de67561edd222a7cc93975407ca4873e\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:02:14.985279 kubelet[2579]: I0304 01:02:14.985269 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de67561edd222a7cc93975407ca4873e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"de67561edd222a7cc93975407ca4873e\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:02:14.985849 kubelet[2579]: I0304 01:02:14.985306 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:02:14.985849 kubelet[2579]: I0304 01:02:14.985330 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:02:14.985849 kubelet[2579]: I0304 01:02:14.985627 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:02:14.985849 kubelet[2579]: I0304 01:02:14.985738 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de67561edd222a7cc93975407ca4873e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"de67561edd222a7cc93975407ca4873e\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:02:14.985849 kubelet[2579]: I0304 01:02:14.985775 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:02:14.986035 kubelet[2579]: I0304 01:02:14.985799 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:02:14.986035 kubelet[2579]: I0304 01:02:14.985828 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 4 01:02:15.263912 kubelet[2579]: E0304 01:02:15.262305 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:15.272812 kubelet[2579]: E0304 01:02:15.271851 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:15.279325 kubelet[2579]: E0304 01:02:15.279192 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:15.411342 kubelet[2579]: I0304 01:02:15.411016 2579 apiserver.go:52] "Watching apiserver" Mar 4 01:02:15.502815 kubelet[2579]: I0304 01:02:15.485590 2579 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 01:02:15.579502 kubelet[2579]: I0304 01:02:15.574989 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.574845949 podStartE2EDuration="1.574845949s" podCreationTimestamp="2026-03-04 01:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:02:15.573756302 +0000 UTC m=+1.545818114" watchObservedRunningTime="2026-03-04 01:02:15.574845949 +0000 UTC m=+1.546907742" Mar 4 01:02:15.702445 kubelet[2579]: E0304 01:02:15.698030 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:15.702445 kubelet[2579]: E0304 01:02:15.698775 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:15.702445 kubelet[2579]: I0304 01:02:15.699005 2579 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" 
Mar 4 01:02:15.836880 kubelet[2579]: E0304 01:02:15.835809 2579 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:02:15.836880 kubelet[2579]: E0304 01:02:15.836495 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:15.925410 kubelet[2579]: I0304 01:02:15.925247 2579 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 01:02:15.926954 containerd[1481]: time="2026-03-04T01:02:15.926854211Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 4 01:02:16.011558 kubelet[2579]: I0304 01:02:16.011068 2579 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 4 01:02:16.705702 kubelet[2579]: E0304 01:02:16.704057 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:16.705702 kubelet[2579]: E0304 01:02:16.706270 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:16.768486 kubelet[2579]: E0304 01:02:16.764874 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:17.679569 kubelet[2579]: I0304 01:02:17.555087 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee7b652d-0f20-4153-9d2b-9faa3b9a1997-kube-proxy\") pod \"kube-proxy-6zbjm\" (UID: 
\"ee7b652d-0f20-4153-9d2b-9faa3b9a1997\") " pod="kube-system/kube-proxy-6zbjm" Mar 4 01:02:17.679569 kubelet[2579]: I0304 01:02:17.614100 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxdhc\" (UniqueName: \"kubernetes.io/projected/ee7b652d-0f20-4153-9d2b-9faa3b9a1997-kube-api-access-lxdhc\") pod \"kube-proxy-6zbjm\" (UID: \"ee7b652d-0f20-4153-9d2b-9faa3b9a1997\") " pod="kube-system/kube-proxy-6zbjm" Mar 4 01:02:18.520913 systemd[1]: Created slice kubepods-besteffort-podee7b652d_0f20_4153_9d2b_9faa3b9a1997.slice - libcontainer container kubepods-besteffort-podee7b652d_0f20_4153_9d2b_9faa3b9a1997.slice. Mar 4 01:02:18.522738 kubelet[2579]: I0304 01:02:18.521336 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee7b652d-0f20-4153-9d2b-9faa3b9a1997-xtables-lock\") pod \"kube-proxy-6zbjm\" (UID: \"ee7b652d-0f20-4153-9d2b-9faa3b9a1997\") " pod="kube-system/kube-proxy-6zbjm" Mar 4 01:02:18.522738 kubelet[2579]: I0304 01:02:18.521580 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee7b652d-0f20-4153-9d2b-9faa3b9a1997-lib-modules\") pod \"kube-proxy-6zbjm\" (UID: \"ee7b652d-0f20-4153-9d2b-9faa3b9a1997\") " pod="kube-system/kube-proxy-6zbjm" Mar 4 01:02:18.867447 kubelet[2579]: E0304 01:02:18.865113 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:18.873463 containerd[1481]: time="2026-03-04T01:02:18.873159794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6zbjm,Uid:ee7b652d-0f20-4153-9d2b-9faa3b9a1997,Namespace:kube-system,Attempt:0,}" Mar 4 01:02:19.259601 containerd[1481]: time="2026-03-04T01:02:19.258733715Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:19.259601 containerd[1481]: time="2026-03-04T01:02:19.258974699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:19.259601 containerd[1481]: time="2026-03-04T01:02:19.259002924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:19.259601 containerd[1481]: time="2026-03-04T01:02:19.259207551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:19.741065 systemd[1]: Started cri-containerd-d2b41ff3d0c7206688eb9aeaf9b239063a016695260403ae81f0c2f4ad56f3b2.scope - libcontainer container d2b41ff3d0c7206688eb9aeaf9b239063a016695260403ae81f0c2f4ad56f3b2. Mar 4 01:02:20.337062 containerd[1481]: time="2026-03-04T01:02:20.336811244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6zbjm,Uid:ee7b652d-0f20-4153-9d2b-9faa3b9a1997,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2b41ff3d0c7206688eb9aeaf9b239063a016695260403ae81f0c2f4ad56f3b2\"" Mar 4 01:02:20.349801 kubelet[2579]: E0304 01:02:20.349668 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:20.403990 containerd[1481]: time="2026-03-04T01:02:20.403616653Z" level=info msg="CreateContainer within sandbox \"d2b41ff3d0c7206688eb9aeaf9b239063a016695260403ae81f0c2f4ad56f3b2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 4 01:02:20.739090 containerd[1481]: time="2026-03-04T01:02:20.737021891Z" level=info msg="CreateContainer within sandbox \"d2b41ff3d0c7206688eb9aeaf9b239063a016695260403ae81f0c2f4ad56f3b2\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bfa4a6574ff706e0051eb2bf3e3c58fb81712a9f8b7b353633a1439036ac9add\"" Mar 4 01:02:20.750626 containerd[1481]: time="2026-03-04T01:02:20.745156392Z" level=info msg="StartContainer for \"bfa4a6574ff706e0051eb2bf3e3c58fb81712a9f8b7b353633a1439036ac9add\"" Mar 4 01:02:20.984580 systemd[1]: Started cri-containerd-bfa4a6574ff706e0051eb2bf3e3c58fb81712a9f8b7b353633a1439036ac9add.scope - libcontainer container bfa4a6574ff706e0051eb2bf3e3c58fb81712a9f8b7b353633a1439036ac9add. Mar 4 01:02:21.003459 kubelet[2579]: I0304 01:02:21.000018 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aaeea65b-b102-45df-b8fe-9cc542739335-var-lib-calico\") pod \"tigera-operator-5588576f44-82t5g\" (UID: \"aaeea65b-b102-45df-b8fe-9cc542739335\") " pod="tigera-operator/tigera-operator-5588576f44-82t5g" Mar 4 01:02:21.003459 kubelet[2579]: I0304 01:02:21.000069 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jpxr\" (UniqueName: \"kubernetes.io/projected/aaeea65b-b102-45df-b8fe-9cc542739335-kube-api-access-8jpxr\") pod \"tigera-operator-5588576f44-82t5g\" (UID: \"aaeea65b-b102-45df-b8fe-9cc542739335\") " pod="tigera-operator/tigera-operator-5588576f44-82t5g" Mar 4 01:02:21.059647 systemd[1]: Created slice kubepods-besteffort-podaaeea65b_b102_45df_b8fe_9cc542739335.slice - libcontainer container kubepods-besteffort-podaaeea65b_b102_45df_b8fe_9cc542739335.slice. 
Mar 4 01:02:21.338319 containerd[1481]: time="2026-03-04T01:02:21.335241200Z" level=info msg="StartContainer for \"bfa4a6574ff706e0051eb2bf3e3c58fb81712a9f8b7b353633a1439036ac9add\" returns successfully" Mar 4 01:02:21.378842 containerd[1481]: time="2026-03-04T01:02:21.377564993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-82t5g,Uid:aaeea65b-b102-45df-b8fe-9cc542739335,Namespace:tigera-operator,Attempt:0,}" Mar 4 01:02:21.937278 kubelet[2579]: E0304 01:02:21.935208 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:22.008707 containerd[1481]: time="2026-03-04T01:02:21.989571065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:22.008707 containerd[1481]: time="2026-03-04T01:02:21.989661005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:22.008707 containerd[1481]: time="2026-03-04T01:02:21.989713484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:22.008707 containerd[1481]: time="2026-03-04T01:02:21.989849040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:22.062741 systemd[1]: Started cri-containerd-125d4c5d0be0802dcbcf5dd0af95e8a5922e4f8ef703b30a411f3e85bd2ac119.scope - libcontainer container 125d4c5d0be0802dcbcf5dd0af95e8a5922e4f8ef703b30a411f3e85bd2ac119. 
Mar 4 01:02:22.071306 kubelet[2579]: E0304 01:02:22.071095 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:22.146653 kubelet[2579]: I0304 01:02:22.146523 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6zbjm" podStartSLOduration=6.146495517 podStartE2EDuration="6.146495517s" podCreationTimestamp="2026-03-04 01:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:02:22.005712741 +0000 UTC m=+7.977774572" watchObservedRunningTime="2026-03-04 01:02:22.146495517 +0000 UTC m=+8.118557309" Mar 4 01:02:22.755205 kubelet[2579]: E0304 01:02:22.755157 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:22.784636 containerd[1481]: time="2026-03-04T01:02:22.784575215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-82t5g,Uid:aaeea65b-b102-45df-b8fe-9cc542739335,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"125d4c5d0be0802dcbcf5dd0af95e8a5922e4f8ef703b30a411f3e85bd2ac119\"" Mar 4 01:02:22.794309 containerd[1481]: time="2026-03-04T01:02:22.793970348Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 4 01:02:22.944279 kubelet[2579]: E0304 01:02:22.944187 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:22.949430 kubelet[2579]: E0304 01:02:22.946791 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:22.949430 
kubelet[2579]: E0304 01:02:22.947248 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:23.949256 kubelet[2579]: E0304 01:02:23.948682 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:24.063099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590321828.mount: Deactivated successfully. Mar 4 01:02:25.523027 kubelet[2579]: E0304 01:02:25.521859 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:35.434774 containerd[1481]: time="2026-03-04T01:02:35.434639881Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:35.439656 containerd[1481]: time="2026-03-04T01:02:35.438295015Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 4 01:02:35.445831 containerd[1481]: time="2026-03-04T01:02:35.445558877Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:35.459556 containerd[1481]: time="2026-03-04T01:02:35.459322224Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:35.463600 containerd[1481]: time="2026-03-04T01:02:35.460340220Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag 
\"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 12.666318334s" Mar 4 01:02:35.463600 containerd[1481]: time="2026-03-04T01:02:35.460459925Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 4 01:02:35.481534 containerd[1481]: time="2026-03-04T01:02:35.481169480Z" level=info msg="CreateContainer within sandbox \"125d4c5d0be0802dcbcf5dd0af95e8a5922e4f8ef703b30a411f3e85bd2ac119\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 4 01:02:35.532180 containerd[1481]: time="2026-03-04T01:02:35.532090270Z" level=info msg="CreateContainer within sandbox \"125d4c5d0be0802dcbcf5dd0af95e8a5922e4f8ef703b30a411f3e85bd2ac119\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f82c38b75abfa64c9eec28bd6f17932cb13b426745d08f6110399178be22dd3e\"" Mar 4 01:02:35.538573 containerd[1481]: time="2026-03-04T01:02:35.535762111Z" level=info msg="StartContainer for \"f82c38b75abfa64c9eec28bd6f17932cb13b426745d08f6110399178be22dd3e\"" Mar 4 01:02:35.664976 systemd[1]: Started cri-containerd-f82c38b75abfa64c9eec28bd6f17932cb13b426745d08f6110399178be22dd3e.scope - libcontainer container f82c38b75abfa64c9eec28bd6f17932cb13b426745d08f6110399178be22dd3e. 
Mar 4 01:02:36.535957 containerd[1481]: time="2026-03-04T01:02:36.529611603Z" level=info msg="StartContainer for \"f82c38b75abfa64c9eec28bd6f17932cb13b426745d08f6110399178be22dd3e\" returns successfully" Mar 4 01:02:37.417145 kubelet[2579]: I0304 01:02:37.416875 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-82t5g" podStartSLOduration=4.741453841 podStartE2EDuration="17.416852428s" podCreationTimestamp="2026-03-04 01:02:20 +0000 UTC" firstStartedPulling="2026-03-04 01:02:22.791135064 +0000 UTC m=+8.763196876" lastFinishedPulling="2026-03-04 01:02:35.466533631 +0000 UTC m=+21.438595463" observedRunningTime="2026-03-04 01:02:37.4151476 +0000 UTC m=+23.387209432" watchObservedRunningTime="2026-03-04 01:02:37.416852428 +0000 UTC m=+23.388914221" Mar 4 01:02:46.068083 sudo[1643]: pam_unix(sudo:session): session closed for user root Mar 4 01:02:46.082160 sshd[1640]: pam_unix(sshd:session): session closed for user core Mar 4 01:02:46.113586 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:43930.service: Deactivated successfully. Mar 4 01:02:46.127163 systemd[1]: session-7.scope: Deactivated successfully. Mar 4 01:02:46.134409 systemd[1]: session-7.scope: Consumed 24.135s CPU time, 164.1M memory peak, 0B memory swap peak. Mar 4 01:02:46.145063 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Mar 4 01:02:46.163643 systemd-logind[1450]: Removed session 7. Mar 4 01:02:50.971557 systemd[1]: Created slice kubepods-besteffort-pod8304a1b6_8f25_47ba_aa81_07dfbfbd6bed.slice - libcontainer container kubepods-besteffort-pod8304a1b6_8f25_47ba_aa81_07dfbfbd6bed.slice. 
Mar 4 01:02:51.033868 kubelet[2579]: I0304 01:02:51.033681 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25mdd\" (UniqueName: \"kubernetes.io/projected/8304a1b6-8f25-47ba-aa81-07dfbfbd6bed-kube-api-access-25mdd\") pod \"calico-typha-5684f4c75c-mw9kq\" (UID: \"8304a1b6-8f25-47ba-aa81-07dfbfbd6bed\") " pod="calico-system/calico-typha-5684f4c75c-mw9kq" Mar 4 01:02:51.033868 kubelet[2579]: I0304 01:02:51.033800 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8304a1b6-8f25-47ba-aa81-07dfbfbd6bed-typha-certs\") pod \"calico-typha-5684f4c75c-mw9kq\" (UID: \"8304a1b6-8f25-47ba-aa81-07dfbfbd6bed\") " pod="calico-system/calico-typha-5684f4c75c-mw9kq" Mar 4 01:02:51.033868 kubelet[2579]: I0304 01:02:51.033836 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8304a1b6-8f25-47ba-aa81-07dfbfbd6bed-tigera-ca-bundle\") pod \"calico-typha-5684f4c75c-mw9kq\" (UID: \"8304a1b6-8f25-47ba-aa81-07dfbfbd6bed\") " pod="calico-system/calico-typha-5684f4c75c-mw9kq" Mar 4 01:02:51.058892 systemd[1]: Created slice kubepods-besteffort-pod8adce66e_6575_463f_97f2_a0dec48c7611.slice - libcontainer container kubepods-besteffort-pod8adce66e_6575_463f_97f2_a0dec48c7611.slice. 
Mar 4 01:02:51.134809 kubelet[2579]: I0304 01:02:51.134561 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-flexvol-driver-host\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.134809 kubelet[2579]: I0304 01:02:51.134728 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-sys-fs\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.134809 kubelet[2579]: I0304 01:02:51.134758 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8adce66e-6575-463f-97f2-a0dec48c7611-tigera-ca-bundle\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.134809 kubelet[2579]: I0304 01:02:51.134785 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-var-run-calico\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.135341 kubelet[2579]: I0304 01:02:51.134815 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-lib-modules\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.135341 kubelet[2579]: I0304 01:02:51.134940 2579 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-var-lib-calico\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.135341 kubelet[2579]: I0304 01:02:51.134970 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-cni-bin-dir\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.135341 kubelet[2579]: I0304 01:02:51.134996 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-cni-log-dir\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.135341 kubelet[2579]: I0304 01:02:51.135016 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-xtables-lock\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.135745 kubelet[2579]: I0304 01:02:51.135057 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-nodeproc\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.135745 kubelet[2579]: I0304 01:02:51.135082 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-88g4j\" (UniqueName: \"kubernetes.io/projected/8adce66e-6575-463f-97f2-a0dec48c7611-kube-api-access-88g4j\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.135745 kubelet[2579]: I0304 01:02:51.135108 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-policysync\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.135745 kubelet[2579]: I0304 01:02:51.135198 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-cni-net-dir\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.135745 kubelet[2579]: I0304 01:02:51.135233 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8adce66e-6575-463f-97f2-a0dec48c7611-node-certs\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.135961 kubelet[2579]: I0304 01:02:51.135272 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/8adce66e-6575-463f-97f2-a0dec48c7611-bpffs\") pod \"calico-node-44r5c\" (UID: \"8adce66e-6575-463f-97f2-a0dec48c7611\") " pod="calico-system/calico-node-44r5c" Mar 4 01:02:51.169768 kubelet[2579]: E0304 01:02:51.167086 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440" Mar 4 01:02:51.236045 kubelet[2579]: I0304 01:02:51.235725 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/24812854-f0ac-4651-986c-4d61a0df5440-socket-dir\") pod \"csi-node-driver-rs6f4\" (UID: \"24812854-f0ac-4651-986c-4d61a0df5440\") " pod="calico-system/csi-node-driver-rs6f4" Mar 4 01:02:51.236045 kubelet[2579]: I0304 01:02:51.235881 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24812854-f0ac-4651-986c-4d61a0df5440-kubelet-dir\") pod \"csi-node-driver-rs6f4\" (UID: \"24812854-f0ac-4651-986c-4d61a0df5440\") " pod="calico-system/csi-node-driver-rs6f4" Mar 4 01:02:51.236045 kubelet[2579]: I0304 01:02:51.235909 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7kcw\" (UniqueName: \"kubernetes.io/projected/24812854-f0ac-4651-986c-4d61a0df5440-kube-api-access-b7kcw\") pod \"csi-node-driver-rs6f4\" (UID: \"24812854-f0ac-4651-986c-4d61a0df5440\") " pod="calico-system/csi-node-driver-rs6f4" Mar 4 01:02:51.236045 kubelet[2579]: I0304 01:02:51.235963 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/24812854-f0ac-4651-986c-4d61a0df5440-varrun\") pod \"csi-node-driver-rs6f4\" (UID: \"24812854-f0ac-4651-986c-4d61a0df5440\") " pod="calico-system/csi-node-driver-rs6f4" Mar 4 01:02:51.236045 kubelet[2579]: I0304 01:02:51.236000 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/24812854-f0ac-4651-986c-4d61a0df5440-registration-dir\") pod \"csi-node-driver-rs6f4\" (UID: 
\"24812854-f0ac-4651-986c-4d61a0df5440\") " pod="calico-system/csi-node-driver-rs6f4" Mar 4 01:02:51.256961 kubelet[2579]: E0304 01:02:51.255306 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.256961 kubelet[2579]: W0304 01:02:51.255725 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.256961 kubelet[2579]: E0304 01:02:51.256120 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.515908 kubelet[2579]: E0304 01:02:51.513763 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.515908 kubelet[2579]: W0304 01:02:51.514523 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.515908 kubelet[2579]: E0304 01:02:51.514729 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.518944 kubelet[2579]: E0304 01:02:51.516898 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.518944 kubelet[2579]: W0304 01:02:51.516922 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.518944 kubelet[2579]: E0304 01:02:51.516945 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.520729 kubelet[2579]: E0304 01:02:51.519729 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:51.520729 kubelet[2579]: E0304 01:02:51.520225 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.520729 kubelet[2579]: W0304 01:02:51.520259 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.520729 kubelet[2579]: E0304 01:02:51.520295 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.523028 containerd[1481]: time="2026-03-04T01:02:51.522989131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5684f4c75c-mw9kq,Uid:8304a1b6-8f25-47ba-aa81-07dfbfbd6bed,Namespace:calico-system,Attempt:0,}" Mar 4 01:02:51.525476 kubelet[2579]: E0304 01:02:51.523503 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.525476 kubelet[2579]: W0304 01:02:51.523535 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.525476 kubelet[2579]: E0304 01:02:51.523559 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.525476 kubelet[2579]: E0304 01:02:51.524091 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.525476 kubelet[2579]: W0304 01:02:51.524105 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.525476 kubelet[2579]: E0304 01:02:51.524119 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.525476 kubelet[2579]: E0304 01:02:51.524538 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.525476 kubelet[2579]: W0304 01:02:51.524551 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.525476 kubelet[2579]: E0304 01:02:51.524564 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.525476 kubelet[2579]: E0304 01:02:51.524978 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.525796 kubelet[2579]: W0304 01:02:51.524993 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.525796 kubelet[2579]: E0304 01:02:51.525008 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.525796 kubelet[2579]: E0304 01:02:51.525278 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.525796 kubelet[2579]: W0304 01:02:51.525289 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.525796 kubelet[2579]: E0304 01:02:51.525303 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.528945 kubelet[2579]: E0304 01:02:51.528708 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.528945 kubelet[2579]: W0304 01:02:51.528789 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.528945 kubelet[2579]: E0304 01:02:51.528814 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.542124 kubelet[2579]: E0304 01:02:51.541964 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.542317 kubelet[2579]: W0304 01:02:51.542135 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.544127 kubelet[2579]: E0304 01:02:51.542746 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.550239 kubelet[2579]: E0304 01:02:51.550123 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.550239 kubelet[2579]: W0304 01:02:51.550197 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.550239 kubelet[2579]: E0304 01:02:51.550227 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.551325 kubelet[2579]: E0304 01:02:51.550917 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.551325 kubelet[2579]: W0304 01:02:51.550979 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.551325 kubelet[2579]: E0304 01:02:51.550999 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.555119 kubelet[2579]: E0304 01:02:51.553299 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.555119 kubelet[2579]: W0304 01:02:51.553471 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.555119 kubelet[2579]: E0304 01:02:51.553493 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.555119 kubelet[2579]: E0304 01:02:51.554099 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.555119 kubelet[2579]: W0304 01:02:51.554112 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.555119 kubelet[2579]: E0304 01:02:51.554126 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.558400 kubelet[2579]: E0304 01:02:51.557659 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.558400 kubelet[2579]: W0304 01:02:51.557834 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.558400 kubelet[2579]: E0304 01:02:51.557872 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.558490 kubelet[2579]: E0304 01:02:51.558463 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.558490 kubelet[2579]: W0304 01:02:51.558478 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.558648 kubelet[2579]: E0304 01:02:51.558496 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.561813 kubelet[2579]: E0304 01:02:51.561723 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.561952 kubelet[2579]: W0304 01:02:51.561887 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.562108 kubelet[2579]: E0304 01:02:51.561955 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.563875 kubelet[2579]: E0304 01:02:51.563748 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.563875 kubelet[2579]: W0304 01:02:51.563816 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.563875 kubelet[2579]: E0304 01:02:51.563845 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.564763 kubelet[2579]: E0304 01:02:51.564700 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.564763 kubelet[2579]: W0304 01:02:51.564749 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.564763 kubelet[2579]: E0304 01:02:51.564764 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.570540 kubelet[2579]: E0304 01:02:51.570472 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.570540 kubelet[2579]: W0304 01:02:51.570531 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.570728 kubelet[2579]: E0304 01:02:51.570560 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.572508 kubelet[2579]: E0304 01:02:51.572456 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.572508 kubelet[2579]: W0304 01:02:51.572501 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.572508 kubelet[2579]: E0304 01:02:51.572515 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.574641 kubelet[2579]: E0304 01:02:51.574538 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.574641 kubelet[2579]: W0304 01:02:51.574620 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.574641 kubelet[2579]: E0304 01:02:51.574637 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.576486 kubelet[2579]: E0304 01:02:51.575213 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.576486 kubelet[2579]: W0304 01:02:51.575229 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.576486 kubelet[2579]: E0304 01:02:51.575240 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.576486 kubelet[2579]: E0304 01:02:51.575694 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.576486 kubelet[2579]: W0304 01:02:51.575708 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.576486 kubelet[2579]: E0304 01:02:51.575719 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.578414 kubelet[2579]: E0304 01:02:51.577469 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.578414 kubelet[2579]: W0304 01:02:51.577487 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.578414 kubelet[2579]: E0304 01:02:51.577499 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.578414 kubelet[2579]: E0304 01:02:51.577887 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.578414 kubelet[2579]: W0304 01:02:51.577897 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.578414 kubelet[2579]: E0304 01:02:51.577907 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.585471 kubelet[2579]: E0304 01:02:51.580528 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.585471 kubelet[2579]: W0304 01:02:51.580617 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.585471 kubelet[2579]: E0304 01:02:51.580632 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:51.617960 kubelet[2579]: E0304 01:02:51.617866 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:51.618172 kubelet[2579]: W0304 01:02:51.618017 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:51.618172 kubelet[2579]: E0304 01:02:51.618132 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:51.674125 containerd[1481]: time="2026-03-04T01:02:51.673244981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:51.674125 containerd[1481]: time="2026-03-04T01:02:51.673327126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:51.674125 containerd[1481]: time="2026-03-04T01:02:51.673342625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:51.675955 containerd[1481]: time="2026-03-04T01:02:51.674568189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-44r5c,Uid:8adce66e-6575-463f-97f2-a0dec48c7611,Namespace:calico-system,Attempt:0,}" Mar 4 01:02:51.675955 containerd[1481]: time="2026-03-04T01:02:51.674205212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:51.750131 systemd[1]: Started cri-containerd-bb7697be7d69dd9b72959bdfcaa3cea53e25fd4fc46dbc4e026cebb6b8394ca3.scope - libcontainer container bb7697be7d69dd9b72959bdfcaa3cea53e25fd4fc46dbc4e026cebb6b8394ca3. 
Mar 4 01:02:51.766430 containerd[1481]: time="2026-03-04T01:02:51.763814287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:02:51.766430 containerd[1481]: time="2026-03-04T01:02:51.763984196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:02:51.766430 containerd[1481]: time="2026-03-04T01:02:51.764011808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:51.768537 containerd[1481]: time="2026-03-04T01:02:51.768328345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:02:51.852145 systemd[1]: Started cri-containerd-d1036741b0920687876496de76d43050789fe2d7f825cc8764dcedd3e6f998ae.scope - libcontainer container d1036741b0920687876496de76d43050789fe2d7f825cc8764dcedd3e6f998ae. 
Mar 4 01:02:51.901311 containerd[1481]: time="2026-03-04T01:02:51.901008432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5684f4c75c-mw9kq,Uid:8304a1b6-8f25-47ba-aa81-07dfbfbd6bed,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb7697be7d69dd9b72959bdfcaa3cea53e25fd4fc46dbc4e026cebb6b8394ca3\"" Mar 4 01:02:51.925083 kubelet[2579]: E0304 01:02:51.924517 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:51.927563 containerd[1481]: time="2026-03-04T01:02:51.927519222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 4 01:02:51.947070 containerd[1481]: time="2026-03-04T01:02:51.946943095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-44r5c,Uid:8adce66e-6575-463f-97f2-a0dec48c7611,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1036741b0920687876496de76d43050789fe2d7f825cc8764dcedd3e6f998ae\"" Mar 4 01:02:52.671903 kubelet[2579]: E0304 01:02:52.671235 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440" Mar 4 01:02:53.242817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196697895.mount: Deactivated successfully. 
Mar 4 01:02:54.682296 kubelet[2579]: E0304 01:02:54.682067 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440" Mar 4 01:02:55.704122 containerd[1481]: time="2026-03-04T01:02:55.701627285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:55.715254 containerd[1481]: time="2026-03-04T01:02:55.715087855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 4 01:02:55.722634 containerd[1481]: time="2026-03-04T01:02:55.722536311Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:55.737055 containerd[1481]: time="2026-03-04T01:02:55.736893647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:55.742747 containerd[1481]: time="2026-03-04T01:02:55.742513299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.814932642s" Mar 4 01:02:55.742747 containerd[1481]: time="2026-03-04T01:02:55.742640468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 4 01:02:55.749843 containerd[1481]: time="2026-03-04T01:02:55.748456249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 4 01:02:55.793101 containerd[1481]: time="2026-03-04T01:02:55.792767206Z" level=info msg="CreateContainer within sandbox \"bb7697be7d69dd9b72959bdfcaa3cea53e25fd4fc46dbc4e026cebb6b8394ca3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 4 01:02:55.853051 containerd[1481]: time="2026-03-04T01:02:55.852858211Z" level=info msg="CreateContainer within sandbox \"bb7697be7d69dd9b72959bdfcaa3cea53e25fd4fc46dbc4e026cebb6b8394ca3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"00899dd803f9ef0d9e2f941a221b2e81874a7369c8a641ed77ea03befdb3674c\"" Mar 4 01:02:55.856946 containerd[1481]: time="2026-03-04T01:02:55.855238983Z" level=info msg="StartContainer for \"00899dd803f9ef0d9e2f941a221b2e81874a7369c8a641ed77ea03befdb3674c\"" Mar 4 01:02:56.031939 systemd[1]: Started cri-containerd-00899dd803f9ef0d9e2f941a221b2e81874a7369c8a641ed77ea03befdb3674c.scope - libcontainer container 00899dd803f9ef0d9e2f941a221b2e81874a7369c8a641ed77ea03befdb3674c. 
Mar 4 01:02:56.243450 containerd[1481]: time="2026-03-04T01:02:56.242175804Z" level=info msg="StartContainer for \"00899dd803f9ef0d9e2f941a221b2e81874a7369c8a641ed77ea03befdb3674c\" returns successfully" Mar 4 01:02:56.299336 kubelet[2579]: E0304 01:02:56.298316 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:56.331318 kubelet[2579]: E0304 01:02:56.326160 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.331318 kubelet[2579]: W0304 01:02:56.326202 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.331318 kubelet[2579]: E0304 01:02:56.326236 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.331318 kubelet[2579]: E0304 01:02:56.328412 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.331318 kubelet[2579]: W0304 01:02:56.328434 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.331318 kubelet[2579]: E0304 01:02:56.328460 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.331318 kubelet[2579]: E0304 01:02:56.329339 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.331318 kubelet[2579]: W0304 01:02:56.329473 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.331318 kubelet[2579]: E0304 01:02:56.329570 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.335574 kubelet[2579]: E0304 01:02:56.333763 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.335574 kubelet[2579]: W0304 01:02:56.333878 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.335574 kubelet[2579]: E0304 01:02:56.333913 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.335574 kubelet[2579]: E0304 01:02:56.334617 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.335574 kubelet[2579]: W0304 01:02:56.334628 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.335574 kubelet[2579]: E0304 01:02:56.334641 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.335574 kubelet[2579]: E0304 01:02:56.335042 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.335574 kubelet[2579]: W0304 01:02:56.335058 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.335574 kubelet[2579]: E0304 01:02:56.335074 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.338921 kubelet[2579]: E0304 01:02:56.337181 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.338921 kubelet[2579]: W0304 01:02:56.337201 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.338921 kubelet[2579]: E0304 01:02:56.337416 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.339832 kubelet[2579]: E0304 01:02:56.339742 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.339832 kubelet[2579]: W0304 01:02:56.339809 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.339832 kubelet[2579]: E0304 01:02:56.339830 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.340747 kubelet[2579]: E0304 01:02:56.340596 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.340747 kubelet[2579]: W0304 01:02:56.340660 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.340747 kubelet[2579]: E0304 01:02:56.340728 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.343413 kubelet[2579]: E0304 01:02:56.343275 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.343875 kubelet[2579]: W0304 01:02:56.343604 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.343875 kubelet[2579]: E0304 01:02:56.343776 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.347649 kubelet[2579]: E0304 01:02:56.347236 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.347649 kubelet[2579]: W0304 01:02:56.347267 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.347649 kubelet[2579]: E0304 01:02:56.347299 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.349431 kubelet[2579]: E0304 01:02:56.349325 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.349543 kubelet[2579]: W0304 01:02:56.349519 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.349639 kubelet[2579]: E0304 01:02:56.349619 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.353634 kubelet[2579]: E0304 01:02:56.353322 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.355826 kubelet[2579]: W0304 01:02:56.354329 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.355826 kubelet[2579]: E0304 01:02:56.354497 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.359630 kubelet[2579]: E0304 01:02:56.359591 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.360153 kubelet[2579]: W0304 01:02:56.359851 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.360153 kubelet[2579]: E0304 01:02:56.359898 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.371014 kubelet[2579]: I0304 01:02:56.365314 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5684f4c75c-mw9kq" podStartSLOduration=2.546320508 podStartE2EDuration="6.365291227s" podCreationTimestamp="2026-03-04 01:02:50 +0000 UTC" firstStartedPulling="2026-03-04 01:02:51.926791811 +0000 UTC m=+37.898853603" lastFinishedPulling="2026-03-04 01:02:55.74576253 +0000 UTC m=+41.717824322" observedRunningTime="2026-03-04 01:02:56.355927051 +0000 UTC m=+42.327988853" watchObservedRunningTime="2026-03-04 01:02:56.365291227 +0000 UTC m=+42.337353049" Mar 4 01:02:56.375129 kubelet[2579]: E0304 01:02:56.372042 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.375129 kubelet[2579]: W0304 01:02:56.372082 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.375129 kubelet[2579]: E0304 01:02:56.372118 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.375129 kubelet[2579]: E0304 01:02:56.373986 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.375129 kubelet[2579]: W0304 01:02:56.374012 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.375129 kubelet[2579]: E0304 01:02:56.374296 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.375589 kubelet[2579]: E0304 01:02:56.375480 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.375589 kubelet[2579]: W0304 01:02:56.375498 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.375589 kubelet[2579]: E0304 01:02:56.375520 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.376652 kubelet[2579]: E0304 01:02:56.376483 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.376652 kubelet[2579]: W0304 01:02:56.376589 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.376652 kubelet[2579]: E0304 01:02:56.376614 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.377303 kubelet[2579]: E0304 01:02:56.377280 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.377303 kubelet[2579]: W0304 01:02:56.377300 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.377628 kubelet[2579]: E0304 01:02:56.377320 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.382269 kubelet[2579]: E0304 01:02:56.382100 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.382269 kubelet[2579]: W0304 01:02:56.382173 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.382269 kubelet[2579]: E0304 01:02:56.382206 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.383140 kubelet[2579]: E0304 01:02:56.383115 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.383140 kubelet[2579]: W0304 01:02:56.383138 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.384219 kubelet[2579]: E0304 01:02:56.383162 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.384918 kubelet[2579]: E0304 01:02:56.384869 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.384981 kubelet[2579]: W0304 01:02:56.384939 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.384981 kubelet[2579]: E0304 01:02:56.384969 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.386195 kubelet[2579]: E0304 01:02:56.386064 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.386195 kubelet[2579]: W0304 01:02:56.386083 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.386195 kubelet[2579]: E0304 01:02:56.386106 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.391742 kubelet[2579]: E0304 01:02:56.391611 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.391742 kubelet[2579]: W0304 01:02:56.391735 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.392119 kubelet[2579]: E0304 01:02:56.392029 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.393266 kubelet[2579]: E0304 01:02:56.393215 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.393335 kubelet[2579]: W0304 01:02:56.393267 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.393335 kubelet[2579]: E0304 01:02:56.393291 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.394133 kubelet[2579]: E0304 01:02:56.394073 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.394133 kubelet[2579]: W0304 01:02:56.394094 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.394133 kubelet[2579]: E0304 01:02:56.394114 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.396183 kubelet[2579]: E0304 01:02:56.395853 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.396183 kubelet[2579]: W0304 01:02:56.395874 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.396183 kubelet[2579]: E0304 01:02:56.395894 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.396530 kubelet[2579]: E0304 01:02:56.396436 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.396530 kubelet[2579]: W0304 01:02:56.396449 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.396530 kubelet[2579]: E0304 01:02:56.396466 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.398163 kubelet[2579]: E0304 01:02:56.397438 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.398163 kubelet[2579]: W0304 01:02:56.397456 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.398163 kubelet[2579]: E0304 01:02:56.397471 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.399609 kubelet[2579]: E0304 01:02:56.398585 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.399609 kubelet[2579]: W0304 01:02:56.398604 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.399609 kubelet[2579]: E0304 01:02:56.398624 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.404811 kubelet[2579]: E0304 01:02:56.403155 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.404811 kubelet[2579]: W0304 01:02:56.403322 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.404811 kubelet[2579]: E0304 01:02:56.403624 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.410178 kubelet[2579]: E0304 01:02:56.409225 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.410178 kubelet[2579]: W0304 01:02:56.409258 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.410178 kubelet[2579]: E0304 01:02:56.409287 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:56.410663 kubelet[2579]: E0304 01:02:56.410242 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:56.410663 kubelet[2579]: W0304 01:02:56.410258 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:56.410663 kubelet[2579]: E0304 01:02:56.410277 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:56.623136 kubelet[2579]: E0304 01:02:56.622576 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440" Mar 4 01:02:57.068870 containerd[1481]: time="2026-03-04T01:02:57.065644626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:57.074215 containerd[1481]: time="2026-03-04T01:02:57.073530540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 4 01:02:57.084163 containerd[1481]: time="2026-03-04T01:02:57.084095134Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:57.098196 containerd[1481]: time="2026-03-04T01:02:57.097976533Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:02:57.099073 containerd[1481]: time="2026-03-04T01:02:57.098909792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.350363734s" Mar 4 01:02:57.099073 containerd[1481]: time="2026-03-04T01:02:57.098994000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 4 01:02:57.124866 containerd[1481]: time="2026-03-04T01:02:57.124229335Z" level=info msg="CreateContainer within sandbox \"d1036741b0920687876496de76d43050789fe2d7f825cc8764dcedd3e6f998ae\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 4 01:02:57.194986 containerd[1481]: time="2026-03-04T01:02:57.194830423Z" level=info msg="CreateContainer within sandbox \"d1036741b0920687876496de76d43050789fe2d7f825cc8764dcedd3e6f998ae\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2e5d93ffe34e3af899436f1997d62efb298b4d85552b500aa57ac8b331aad07f\"" Mar 4 01:02:57.196180 containerd[1481]: time="2026-03-04T01:02:57.196056766Z" level=info msg="StartContainer for \"2e5d93ffe34e3af899436f1997d62efb298b4d85552b500aa57ac8b331aad07f\"" Mar 4 01:02:57.276839 systemd[1]: Started cri-containerd-2e5d93ffe34e3af899436f1997d62efb298b4d85552b500aa57ac8b331aad07f.scope - libcontainer container 2e5d93ffe34e3af899436f1997d62efb298b4d85552b500aa57ac8b331aad07f. 
Mar 4 01:02:57.301506 kubelet[2579]: E0304 01:02:57.301302 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:02:57.385040 kubelet[2579]: E0304 01:02:57.384839 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.385040 kubelet[2579]: W0304 01:02:57.384910 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.385295 kubelet[2579]: E0304 01:02:57.385040 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:57.387073 kubelet[2579]: E0304 01:02:57.386067 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.387073 kubelet[2579]: W0304 01:02:57.386086 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.387073 kubelet[2579]: E0304 01:02:57.386110 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:57.389267 kubelet[2579]: E0304 01:02:57.388200 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.389267 kubelet[2579]: W0304 01:02:57.388611 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.389267 kubelet[2579]: E0304 01:02:57.388639 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:57.391151 kubelet[2579]: E0304 01:02:57.391079 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.391151 kubelet[2579]: W0304 01:02:57.391141 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.391273 kubelet[2579]: E0304 01:02:57.391167 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:57.392154 kubelet[2579]: E0304 01:02:57.392053 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.392154 kubelet[2579]: W0304 01:02:57.392109 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.392154 kubelet[2579]: E0304 01:02:57.392132 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:57.393941 kubelet[2579]: E0304 01:02:57.393842 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.393941 kubelet[2579]: W0304 01:02:57.393909 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.393941 kubelet[2579]: E0304 01:02:57.393933 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:57.395250 kubelet[2579]: E0304 01:02:57.395089 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.395250 kubelet[2579]: W0304 01:02:57.395149 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.395250 kubelet[2579]: E0304 01:02:57.395173 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:57.397680 kubelet[2579]: E0304 01:02:57.397651 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.397680 kubelet[2579]: W0304 01:02:57.397672 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.397847 kubelet[2579]: E0304 01:02:57.397745 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:57.401034 kubelet[2579]: E0304 01:02:57.400970 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.401034 kubelet[2579]: W0304 01:02:57.401026 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.401164 kubelet[2579]: E0304 01:02:57.401051 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:57.403794 kubelet[2579]: E0304 01:02:57.403678 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.403794 kubelet[2579]: W0304 01:02:57.403787 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.403915 kubelet[2579]: E0304 01:02:57.403811 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:57.404546 kubelet[2579]: E0304 01:02:57.404452 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.404546 kubelet[2579]: W0304 01:02:57.404493 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.404546 kubelet[2579]: E0304 01:02:57.404510 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:57.405000 kubelet[2579]: E0304 01:02:57.404846 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.405000 kubelet[2579]: W0304 01:02:57.404889 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.405000 kubelet[2579]: E0304 01:02:57.404903 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:57.405736 containerd[1481]: time="2026-03-04T01:02:57.405343529Z" level=info msg="StartContainer for \"2e5d93ffe34e3af899436f1997d62efb298b4d85552b500aa57ac8b331aad07f\" returns successfully" Mar 4 01:02:57.405995 kubelet[2579]: E0304 01:02:57.405839 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.405995 kubelet[2579]: W0304 01:02:57.405858 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.405995 kubelet[2579]: E0304 01:02:57.405876 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:57.407948 kubelet[2579]: E0304 01:02:57.407441 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.407948 kubelet[2579]: W0304 01:02:57.407459 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.407948 kubelet[2579]: E0304 01:02:57.407473 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:57.408225 kubelet[2579]: E0304 01:02:57.408076 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.408225 kubelet[2579]: W0304 01:02:57.408095 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.408225 kubelet[2579]: E0304 01:02:57.408111 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:57.411817 kubelet[2579]: E0304 01:02:57.411223 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.411817 kubelet[2579]: W0304 01:02:57.411245 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.411817 kubelet[2579]: E0304 01:02:57.411266 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:02:57.412239 kubelet[2579]: E0304 01:02:57.412019 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.412239 kubelet[2579]: W0304 01:02:57.412048 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.412239 kubelet[2579]: E0304 01:02:57.412070 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:02:57.414614 kubelet[2579]: E0304 01:02:57.414193 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:02:57.414614 kubelet[2579]: W0304 01:02:57.414214 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:02:57.414614 kubelet[2579]: E0304 01:02:57.414232 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 4 01:02:57.417079 kubelet[2579]: E0304 01:02:57.416966 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.417079 kubelet[2579]: W0304 01:02:57.416986 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.417079 kubelet[2579]: E0304 01:02:57.417006 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.417823 kubelet[2579]: E0304 01:02:57.417633 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.417823 kubelet[2579]: W0304 01:02:57.417651 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.417823 kubelet[2579]: E0304 01:02:57.417667 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.418519 kubelet[2579]: E0304 01:02:57.418267 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.418519 kubelet[2579]: W0304 01:02:57.418287 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.418519 kubelet[2579]: E0304 01:02:57.418303 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.419078 kubelet[2579]: E0304 01:02:57.418926 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.419078 kubelet[2579]: W0304 01:02:57.418941 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.419078 kubelet[2579]: E0304 01:02:57.418957 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.419956 kubelet[2579]: E0304 01:02:57.419341 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.419956 kubelet[2579]: W0304 01:02:57.419447 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.419956 kubelet[2579]: E0304 01:02:57.419463 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.419956 kubelet[2579]: E0304 01:02:57.419951 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.420684 kubelet[2579]: W0304 01:02:57.419964 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.420684 kubelet[2579]: E0304 01:02:57.419979 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.420684 kubelet[2579]: E0304 01:02:57.420556 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.420684 kubelet[2579]: W0304 01:02:57.420570 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.420684 kubelet[2579]: E0304 01:02:57.420584 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.421644 kubelet[2579]: E0304 01:02:57.421084 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.421644 kubelet[2579]: W0304 01:02:57.421103 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.421644 kubelet[2579]: E0304 01:02:57.421119 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.422625 kubelet[2579]: E0304 01:02:57.422300 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.422625 kubelet[2579]: W0304 01:02:57.422314 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.422625 kubelet[2579]: E0304 01:02:57.422328 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.425046 kubelet[2579]: E0304 01:02:57.424853 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.425125 kubelet[2579]: W0304 01:02:57.425094 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.425125 kubelet[2579]: E0304 01:02:57.425116 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.425823 kubelet[2579]: E0304 01:02:57.425751 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.425823 kubelet[2579]: W0304 01:02:57.425806 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.425823 kubelet[2579]: E0304 01:02:57.425824 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.427247 kubelet[2579]: E0304 01:02:57.427041 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.427324 kubelet[2579]: W0304 01:02:57.427308 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.427449 kubelet[2579]: E0304 01:02:57.427328 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.428755 kubelet[2579]: E0304 01:02:57.428535 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.428935 kubelet[2579]: W0304 01:02:57.428855 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.429055 kubelet[2579]: E0304 01:02:57.429010 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.429918 kubelet[2579]: E0304 01:02:57.429845 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.429918 kubelet[2579]: W0304 01:02:57.429895 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.429918 kubelet[2579]: E0304 01:02:57.429915 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.431456 kubelet[2579]: E0304 01:02:57.430898 2579 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:02:57.431456 kubelet[2579]: W0304 01:02:57.430915 2579 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:02:57.431456 kubelet[2579]: E0304 01:02:57.430931 2579 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:02:57.434300 systemd[1]: cri-containerd-2e5d93ffe34e3af899436f1997d62efb298b4d85552b500aa57ac8b331aad07f.scope: Deactivated successfully.
Mar 4 01:02:57.521217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e5d93ffe34e3af899436f1997d62efb298b4d85552b500aa57ac8b331aad07f-rootfs.mount: Deactivated successfully.
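The repeated kubelet failures above all trace to one cause: the FlexVolume driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is missing, so the `init` call returns empty output and the JSON unmarshal fails. A minimal, read-only shell check for that binary (path taken from the log; this is a diagnostic sketch, not part of the log) could look like:

```shell
# Check whether the FlexVolume driver the kubelet keeps probing exists
# and is executable. Read-only; prints a one-line status.
DRIVER=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
if [ -x "$DRIVER" ]; then
  status="present"
else
  status="missing or not executable"
fi
echo "FlexVolume driver $DRIVER: $status"
```

On a node producing the log above, this would report the driver as missing, matching the "executable file not found in $PATH" messages.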
Mar 4 01:02:57.641066 containerd[1481]: time="2026-03-04T01:02:57.638089093Z" level=info msg="shim disconnected" id=2e5d93ffe34e3af899436f1997d62efb298b4d85552b500aa57ac8b331aad07f namespace=k8s.io
Mar 4 01:02:57.641066 containerd[1481]: time="2026-03-04T01:02:57.638342769Z" level=warning msg="cleaning up after shim disconnected" id=2e5d93ffe34e3af899436f1997d62efb298b4d85552b500aa57ac8b331aad07f namespace=k8s.io
Mar 4 01:02:57.641066 containerd[1481]: time="2026-03-04T01:02:57.638449139Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:02:57.743252 containerd[1481]: time="2026-03-04T01:02:57.743102785Z" level=warning msg="cleanup warnings time=\"2026-03-04T01:02:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 4 01:02:58.313023 kubelet[2579]: E0304 01:02:58.312629 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:02:58.329284 containerd[1481]: time="2026-03-04T01:02:58.329212277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 4 01:02:58.623580 kubelet[2579]: E0304 01:02:58.621328 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:06.946658 kubelet[2579]: E0304 01:03:06.935747 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:08.624102 kubelet[2579]: E0304 01:03:08.623821 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:10.623242 kubelet[2579]: E0304 01:03:10.622538 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:12.630564 kubelet[2579]: E0304 01:03:12.629042 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:14.626436 kubelet[2579]: E0304 01:03:14.626287 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:16.624519 kubelet[2579]: E0304 01:03:16.623703 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:18.622483 kubelet[2579]: E0304 01:03:18.621559 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:20.628925 kubelet[2579]: E0304 01:03:20.628658 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:22.656231 kubelet[2579]: E0304 01:03:22.652901 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:24.628123 kubelet[2579]: E0304 01:03:24.626234 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:26.113661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3944909235.mount: Deactivated successfully.
Mar 4 01:03:26.412306 containerd[1481]: time="2026-03-04T01:03:26.411464435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:03:26.418110 containerd[1481]: time="2026-03-04T01:03:26.415874371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 4 01:03:26.418993 containerd[1481]: time="2026-03-04T01:03:26.418897318Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:03:26.429448 containerd[1481]: time="2026-03-04T01:03:26.429243645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:03:26.432293 containerd[1481]: time="2026-03-04T01:03:26.431173934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 28.101898268s"
Mar 4 01:03:26.432293 containerd[1481]: time="2026-03-04T01:03:26.431312714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 4 01:03:26.443978 containerd[1481]: time="2026-03-04T01:03:26.443893124Z" level=info msg="CreateContainer within sandbox \"d1036741b0920687876496de76d43050789fe2d7f825cc8764dcedd3e6f998ae\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 4 01:03:26.549070 containerd[1481]: time="2026-03-04T01:03:26.547951451Z" level=info msg="CreateContainer within sandbox \"d1036741b0920687876496de76d43050789fe2d7f825cc8764dcedd3e6f998ae\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"3d65f9011233f1803d4c2aec3846a474551865c2850876695937c307b232a898\""
Mar 4 01:03:26.555977 containerd[1481]: time="2026-03-04T01:03:26.555182467Z" level=info msg="StartContainer for \"3d65f9011233f1803d4c2aec3846a474551865c2850876695937c307b232a898\""
Mar 4 01:03:26.626446 kubelet[2579]: E0304 01:03:26.626092 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:26.959269 systemd[1]: Started cri-containerd-3d65f9011233f1803d4c2aec3846a474551865c2850876695937c307b232a898.scope - libcontainer container 3d65f9011233f1803d4c2aec3846a474551865c2850876695937c307b232a898.
Mar 4 01:03:27.084334 containerd[1481]: time="2026-03-04T01:03:27.084121935Z" level=info msg="StartContainer for \"3d65f9011233f1803d4c2aec3846a474551865c2850876695937c307b232a898\" returns successfully"
Mar 4 01:03:27.417129 systemd[1]: cri-containerd-3d65f9011233f1803d4c2aec3846a474551865c2850876695937c307b232a898.scope: Deactivated successfully.
Mar 4 01:03:27.491821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d65f9011233f1803d4c2aec3846a474551865c2850876695937c307b232a898-rootfs.mount: Deactivated successfully.
Mar 4 01:03:27.629166 containerd[1481]: time="2026-03-04T01:03:27.628515526Z" level=info msg="shim disconnected" id=3d65f9011233f1803d4c2aec3846a474551865c2850876695937c307b232a898 namespace=k8s.io
Mar 4 01:03:27.629166 containerd[1481]: time="2026-03-04T01:03:27.628977843Z" level=warning msg="cleaning up after shim disconnected" id=3d65f9011233f1803d4c2aec3846a474551865c2850876695937c307b232a898 namespace=k8s.io
Mar 4 01:03:27.629166 containerd[1481]: time="2026-03-04T01:03:27.628994133Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:03:28.379594 containerd[1481]: time="2026-03-04T01:03:28.376873013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 4 01:03:28.633671 kubelet[2579]: E0304 01:03:28.632786 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:30.633466 kubelet[2579]: E0304 01:03:30.631543 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:32.623227 kubelet[2579]: E0304 01:03:32.622812 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:34.637672 kubelet[2579]: E0304 01:03:34.631908 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:34.637672 kubelet[2579]: E0304 01:03:34.636032 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:03:36.277495 containerd[1481]: time="2026-03-04T01:03:36.277311694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:03:36.284531 containerd[1481]: time="2026-03-04T01:03:36.283827673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 4 01:03:36.286697 containerd[1481]: time="2026-03-04T01:03:36.286542975Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:03:36.298000 containerd[1481]: time="2026-03-04T01:03:36.297270916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:03:36.307797 containerd[1481]: time="2026-03-04T01:03:36.307235278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 7.930246849s"
Mar 4 01:03:36.307797 containerd[1481]: time="2026-03-04T01:03:36.307336508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 4 01:03:36.333145 containerd[1481]: time="2026-03-04T01:03:36.332138713Z" level=info msg="CreateContainer within sandbox \"d1036741b0920687876496de76d43050789fe2d7f825cc8764dcedd3e6f998ae\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 4 01:03:36.439987 containerd[1481]: time="2026-03-04T01:03:36.436544175Z" level=info msg="CreateContainer within sandbox \"d1036741b0920687876496de76d43050789fe2d7f825cc8764dcedd3e6f998ae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ff8cad60091e72e6fd8704c4acd4c1af8b0d5943e29b111d323598b8fa1cdca1\""
Mar 4 01:03:36.444061 containerd[1481]: time="2026-03-04T01:03:36.443869146Z" level=info msg="StartContainer for \"ff8cad60091e72e6fd8704c4acd4c1af8b0d5943e29b111d323598b8fa1cdca1\""
Mar 4 01:03:36.563717 systemd[1]: Started cri-containerd-ff8cad60091e72e6fd8704c4acd4c1af8b0d5943e29b111d323598b8fa1cdca1.scope - libcontainer container ff8cad60091e72e6fd8704c4acd4c1af8b0d5943e29b111d323598b8fa1cdca1.
Mar 4 01:03:36.629799 kubelet[2579]: E0304 01:03:36.624560 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:36.786100 containerd[1481]: time="2026-03-04T01:03:36.785249838Z" level=info msg="StartContainer for \"ff8cad60091e72e6fd8704c4acd4c1af8b0d5943e29b111d323598b8fa1cdca1\" returns successfully"
Mar 4 01:03:38.536771 systemd[1]: cri-containerd-ff8cad60091e72e6fd8704c4acd4c1af8b0d5943e29b111d323598b8fa1cdca1.scope: Deactivated successfully.
Mar 4 01:03:38.537272 systemd[1]: cri-containerd-ff8cad60091e72e6fd8704c4acd4c1af8b0d5943e29b111d323598b8fa1cdca1.scope: Consumed 1.878s CPU time.
Mar 4 01:03:38.587559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff8cad60091e72e6fd8704c4acd4c1af8b0d5943e29b111d323598b8fa1cdca1-rootfs.mount: Deactivated successfully.
Mar 4 01:03:38.611527 containerd[1481]: time="2026-03-04T01:03:38.611120767Z" level=info msg="shim disconnected" id=ff8cad60091e72e6fd8704c4acd4c1af8b0d5943e29b111d323598b8fa1cdca1 namespace=k8s.io
Mar 4 01:03:38.611527 containerd[1481]: time="2026-03-04T01:03:38.611496962Z" level=warning msg="cleaning up after shim disconnected" id=ff8cad60091e72e6fd8704c4acd4c1af8b0d5943e29b111d323598b8fa1cdca1 namespace=k8s.io
Mar 4 01:03:38.611527 containerd[1481]: time="2026-03-04T01:03:38.611514815Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:03:38.624095 kubelet[2579]: I0304 01:03:38.621994 2579 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 4 01:03:38.636810 systemd[1]: Created slice kubepods-besteffort-pod24812854_f0ac_4651_986c_4d61a0df5440.slice - libcontainer container kubepods-besteffort-pod24812854_f0ac_4651_986c_4d61a0df5440.slice.
Mar 4 01:03:38.655558 containerd[1481]: time="2026-03-04T01:03:38.655222547Z" level=warning msg="cleanup warnings time=\"2026-03-04T01:03:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 4 01:03:38.658232 containerd[1481]: time="2026-03-04T01:03:38.657978571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rs6f4,Uid:24812854-f0ac-4651-986c-4d61a0df5440,Namespace:calico-system,Attempt:0,}"
Mar 4 01:03:38.781686 systemd[1]: Created slice kubepods-besteffort-pod28010a71_1727_4efe_b343_de6e69fbd281.slice - libcontainer container kubepods-besteffort-pod28010a71_1727_4efe_b343_de6e69fbd281.slice.
Mar 4 01:03:38.814789 systemd[1]: Created slice kubepods-besteffort-pod9c0b93c3_34c4_4c8f_bfc1_54f9448d999f.slice - libcontainer container kubepods-besteffort-pod9c0b93c3_34c4_4c8f_bfc1_54f9448d999f.slice.
Mar 4 01:03:38.838290 kubelet[2579]: I0304 01:03:38.838240 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28010a71-1727-4efe-b343-de6e69fbd281-tigera-ca-bundle\") pod \"calico-kube-controllers-6dbf4f54f5-flgh7\" (UID: \"28010a71-1727-4efe-b343-de6e69fbd281\") " pod="calico-system/calico-kube-controllers-6dbf4f54f5-flgh7"
Mar 4 01:03:38.839526 kubelet[2579]: I0304 01:03:38.838668 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba3a3117-2ed7-420f-9281-01467babd9c7-config-volume\") pod \"coredns-66bc5c9577-rr9gl\" (UID: \"ba3a3117-2ed7-420f-9281-01467babd9c7\") " pod="kube-system/coredns-66bc5c9577-rr9gl"
Mar 4 01:03:38.839526 kubelet[2579]: I0304 01:03:38.838710 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r9wr\" (UniqueName: \"kubernetes.io/projected/28010a71-1727-4efe-b343-de6e69fbd281-kube-api-access-7r9wr\") pod \"calico-kube-controllers-6dbf4f54f5-flgh7\" (UID: \"28010a71-1727-4efe-b343-de6e69fbd281\") " pod="calico-system/calico-kube-controllers-6dbf4f54f5-flgh7"
Mar 4 01:03:38.839526 kubelet[2579]: I0304 01:03:38.838738 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvrv7\" (UniqueName: \"kubernetes.io/projected/0199c151-24ad-4cf2-ae91-b7e9b350322f-kube-api-access-bvrv7\") pod \"coredns-66bc5c9577-svjm8\" (UID: \"0199c151-24ad-4cf2-ae91-b7e9b350322f\") " pod="kube-system/coredns-66bc5c9577-svjm8"
Mar 4 01:03:38.839526 kubelet[2579]: I0304 01:03:38.838767 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ce96956a-db8a-428b-9f28-9b754235562e-nginx-config\") pod \"whisker-87f89c9c8-fnddc\" (UID: \"ce96956a-db8a-428b-9f28-9b754235562e\") " pod="calico-system/whisker-87f89c9c8-fnddc"
Mar 4 01:03:38.839526 kubelet[2579]: I0304 01:03:38.838788 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce96956a-db8a-428b-9f28-9b754235562e-whisker-backend-key-pair\") pod \"whisker-87f89c9c8-fnddc\" (UID: \"ce96956a-db8a-428b-9f28-9b754235562e\") " pod="calico-system/whisker-87f89c9c8-fnddc"
Mar 4 01:03:38.839732 kubelet[2579]: I0304 01:03:38.838815 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c0b93c3-34c4-4c8f-bfc1-54f9448d999f-calico-apiserver-certs\") pod \"calico-apiserver-66b64679fb-6fgtf\" (UID: \"9c0b93c3-34c4-4c8f-bfc1-54f9448d999f\") " pod="calico-system/calico-apiserver-66b64679fb-6fgtf"
Mar 4 01:03:38.839732 kubelet[2579]: I0304 01:03:38.838839 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bdzh\" (UniqueName: \"kubernetes.io/projected/9c0b93c3-34c4-4c8f-bfc1-54f9448d999f-kube-api-access-5bdzh\") pod \"calico-apiserver-66b64679fb-6fgtf\" (UID: \"9c0b93c3-34c4-4c8f-bfc1-54f9448d999f\") " pod="calico-system/calico-apiserver-66b64679fb-6fgtf"
Mar 4 01:03:38.839732 kubelet[2579]: I0304 01:03:38.838864 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0199c151-24ad-4cf2-ae91-b7e9b350322f-config-volume\") pod \"coredns-66bc5c9577-svjm8\" (UID: \"0199c151-24ad-4cf2-ae91-b7e9b350322f\") " pod="kube-system/coredns-66bc5c9577-svjm8"
Mar 4 01:03:38.839732 kubelet[2579]: I0304 01:03:38.838952 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce96956a-db8a-428b-9f28-9b754235562e-whisker-ca-bundle\") pod \"whisker-87f89c9c8-fnddc\" (UID: \"ce96956a-db8a-428b-9f28-9b754235562e\") " pod="calico-system/whisker-87f89c9c8-fnddc"
Mar 4 01:03:38.839732 kubelet[2579]: I0304 01:03:38.838974 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zgxb\" (UniqueName: \"kubernetes.io/projected/ce96956a-db8a-428b-9f28-9b754235562e-kube-api-access-7zgxb\") pod \"whisker-87f89c9c8-fnddc\" (UID: \"ce96956a-db8a-428b-9f28-9b754235562e\") " pod="calico-system/whisker-87f89c9c8-fnddc"
Mar 4 01:03:38.839887 kubelet[2579]: I0304 01:03:38.839036 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdqgb\" (UniqueName: \"kubernetes.io/projected/883fc4b1-6269-44ab-9fb7-38da3bb836eb-kube-api-access-vdqgb\") pod \"goldmane-cccfbd5cf-whcgp\" (UID: \"883fc4b1-6269-44ab-9fb7-38da3bb836eb\") " pod="calico-system/goldmane-cccfbd5cf-whcgp"
Mar 4 01:03:38.839887 kubelet[2579]: I0304 01:03:38.839069 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/883fc4b1-6269-44ab-9fb7-38da3bb836eb-config\") pod \"goldmane-cccfbd5cf-whcgp\" (UID: \"883fc4b1-6269-44ab-9fb7-38da3bb836eb\") " pod="calico-system/goldmane-cccfbd5cf-whcgp"
Mar 4 01:03:38.839887 kubelet[2579]: I0304 01:03:38.839097 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/883fc4b1-6269-44ab-9fb7-38da3bb836eb-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-whcgp\" (UID: \"883fc4b1-6269-44ab-9fb7-38da3bb836eb\") " pod="calico-system/goldmane-cccfbd5cf-whcgp"
Mar 4 01:03:38.839887 kubelet[2579]: I0304 01:03:38.839117 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/883fc4b1-6269-44ab-9fb7-38da3bb836eb-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-whcgp\" (UID: \"883fc4b1-6269-44ab-9fb7-38da3bb836eb\") " pod="calico-system/goldmane-cccfbd5cf-whcgp"
Mar 4 01:03:38.839887 kubelet[2579]: I0304 01:03:38.839149 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/55e105c4-cb80-455a-abe4-3f8ab66ac4c8-calico-apiserver-certs\") pod \"calico-apiserver-66b64679fb-hzv4z\" (UID: \"55e105c4-cb80-455a-abe4-3f8ab66ac4c8\") " pod="calico-system/calico-apiserver-66b64679fb-hzv4z"
Mar 4 01:03:38.840170 kubelet[2579]: I0304 01:03:38.839176 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt544\" (UniqueName: \"kubernetes.io/projected/55e105c4-cb80-455a-abe4-3f8ab66ac4c8-kube-api-access-lt544\") pod \"calico-apiserver-66b64679fb-hzv4z\" (UID: \"55e105c4-cb80-455a-abe4-3f8ab66ac4c8\") " pod="calico-system/calico-apiserver-66b64679fb-hzv4z"
Mar 4 01:03:38.840170 kubelet[2579]: I0304 01:03:38.839206 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvgzr\" (UniqueName: \"kubernetes.io/projected/ba3a3117-2ed7-420f-9281-01467babd9c7-kube-api-access-gvgzr\") pod \"coredns-66bc5c9577-rr9gl\" (UID: \"ba3a3117-2ed7-420f-9281-01467babd9c7\") " pod="kube-system/coredns-66bc5c9577-rr9gl"
Mar 4 01:03:38.856843 systemd[1]: Created slice kubepods-burstable-pod0199c151_24ad_4cf2_ae91_b7e9b350322f.slice - libcontainer container kubepods-burstable-pod0199c151_24ad_4cf2_ae91_b7e9b350322f.slice.
Mar 4 01:03:38.878544 systemd[1]: Created slice kubepods-besteffort-podce96956a_db8a_428b_9f28_9b754235562e.slice - libcontainer container kubepods-besteffort-podce96956a_db8a_428b_9f28_9b754235562e.slice.
Mar 4 01:03:38.907190 systemd[1]: Created slice kubepods-besteffort-pod883fc4b1_6269_44ab_9fb7_38da3bb836eb.slice - libcontainer container kubepods-besteffort-pod883fc4b1_6269_44ab_9fb7_38da3bb836eb.slice.
Mar 4 01:03:38.919973 systemd[1]: Created slice kubepods-burstable-podba3a3117_2ed7_420f_9281_01467babd9c7.slice - libcontainer container kubepods-burstable-podba3a3117_2ed7_420f_9281_01467babd9c7.slice.
Mar 4 01:03:38.930629 systemd[1]: Created slice kubepods-besteffort-pod55e105c4_cb80_455a_abe4_3f8ab66ac4c8.slice - libcontainer container kubepods-besteffort-pod55e105c4_cb80_455a_abe4_3f8ab66ac4c8.slice.
Mar 4 01:03:39.113901 containerd[1481]: time="2026-03-04T01:03:39.113157435Z" level=error msg="Failed to destroy network for sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.118733 containerd[1481]: time="2026-03-04T01:03:39.118297537Z" level=error msg="encountered an error cleaning up failed sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.119517 containerd[1481]: time="2026-03-04T01:03:39.119105872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rs6f4,Uid:24812854-f0ac-4651-986c-4d61a0df5440,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.122142 containerd[1481]: time="2026-03-04T01:03:39.122055145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dbf4f54f5-flgh7,Uid:28010a71-1727-4efe-b343-de6e69fbd281,Namespace:calico-system,Attempt:0,}"
Mar 4 01:03:39.133790 kubelet[2579]: E0304 01:03:39.132918 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.133790 kubelet[2579]: E0304 01:03:39.133096 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rs6f4"
Mar 4 01:03:39.133790 kubelet[2579]: E0304 01:03:39.133174 2579 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rs6f4"
Mar 4 01:03:39.134128 kubelet[2579]: E0304 01:03:39.133313 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rs6f4_calico-system(24812854-f0ac-4651-986c-4d61a0df5440)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rs6f4_calico-system(24812854-f0ac-4651-986c-4d61a0df5440)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:39.154193 containerd[1481]: time="2026-03-04T01:03:39.154079326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b64679fb-6fgtf,Uid:9c0b93c3-34c4-4c8f-bfc1-54f9448d999f,Namespace:calico-system,Attempt:0,}"
Mar 4 01:03:39.178508 kubelet[2579]: E0304 01:03:39.176791 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:03:39.180340 containerd[1481]: time="2026-03-04T01:03:39.179853155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-svjm8,Uid:0199c151-24ad-4cf2-ae91-b7e9b350322f,Namespace:kube-system,Attempt:0,}"
Mar 4 01:03:39.202252 containerd[1481]: time="2026-03-04T01:03:39.202032411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-87f89c9c8-fnddc,Uid:ce96956a-db8a-428b-9f28-9b754235562e,Namespace:calico-system,Attempt:0,}"
Mar 4 01:03:39.247080 containerd[1481]: time="2026-03-04T01:03:39.246975494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-whcgp,Uid:883fc4b1-6269-44ab-9fb7-38da3bb836eb,Namespace:calico-system,Attempt:0,}"
Mar 4 01:03:39.251473 kubelet[2579]: E0304 01:03:39.249906 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:03:39.252190 containerd[1481]: time="2026-03-04T01:03:39.252090829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rr9gl,Uid:ba3a3117-2ed7-420f-9281-01467babd9c7,Namespace:kube-system,Attempt:0,}"
Mar 4 01:03:39.255170 containerd[1481]: time="2026-03-04T01:03:39.255090458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b64679fb-hzv4z,Uid:55e105c4-cb80-455a-abe4-3f8ab66ac4c8,Namespace:calico-system,Attempt:0,}"
Mar 4 01:03:39.535171 containerd[1481]: time="2026-03-04T01:03:39.535004524Z" level=error msg="Failed to destroy network for sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.536743 containerd[1481]: time="2026-03-04T01:03:39.536616443Z" level=error msg="encountered an error cleaning up failed sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.537077 containerd[1481]: time="2026-03-04T01:03:39.536844350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dbf4f54f5-flgh7,Uid:28010a71-1727-4efe-b343-de6e69fbd281,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.537507 kubelet[2579]: E0304 01:03:39.537325 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.538003 kubelet[2579]: E0304 01:03:39.537511 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dbf4f54f5-flgh7"
Mar 4 01:03:39.538003 kubelet[2579]: E0304 01:03:39.537540 2579 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dbf4f54f5-flgh7"
Mar 4 01:03:39.538003 kubelet[2579]: E0304 01:03:39.537626 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dbf4f54f5-flgh7_calico-system(28010a71-1727-4efe-b343-de6e69fbd281)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dbf4f54f5-flgh7_calico-system(28010a71-1727-4efe-b343-de6e69fbd281)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dbf4f54f5-flgh7" podUID="28010a71-1727-4efe-b343-de6e69fbd281"
Mar 4 01:03:39.615505 kubelet[2579]: I0304 01:03:39.612751 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:03:39.614980 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80-shm.mount: Deactivated successfully.
Mar 4 01:03:39.628725 kubelet[2579]: I0304 01:03:39.628303 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80"
Mar 4 01:03:39.670078 containerd[1481]: time="2026-03-04T01:03:39.669743061Z" level=info msg="CreateContainer within sandbox \"d1036741b0920687876496de76d43050789fe2d7f825cc8764dcedd3e6f998ae\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 4 01:03:39.680856 containerd[1481]: time="2026-03-04T01:03:39.678747109Z" level=info msg="StopPodSandbox for \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\""
Mar 4 01:03:39.681499 containerd[1481]: time="2026-03-04T01:03:39.681291899Z" level=info msg="StopPodSandbox for \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\""
Mar 4 01:03:39.683119 containerd[1481]: time="2026-03-04T01:03:39.682999195Z" level=info msg="Ensure that sandbox cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80 in task-service has been cleanup successfully"
Mar 4 01:03:39.685689 containerd[1481]: time="2026-03-04T01:03:39.685604530Z" level=info msg="Ensure that sandbox bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58 in task-service has been cleanup successfully"
Mar 4 01:03:39.716180 containerd[1481]: time="2026-03-04T01:03:39.714615897Z" level=error msg="Failed to destroy network for sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.719105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb-shm.mount: Deactivated successfully.
Mar 4 01:03:39.723270 containerd[1481]: time="2026-03-04T01:03:39.723205481Z" level=error msg="encountered an error cleaning up failed sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.724839 containerd[1481]: time="2026-03-04T01:03:39.724712175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b64679fb-6fgtf,Uid:9c0b93c3-34c4-4c8f-bfc1-54f9448d999f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.730153 kubelet[2579]: E0304 01:03:39.727215 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.730153 kubelet[2579]: E0304 01:03:39.727313 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-66b64679fb-6fgtf"
Mar 4 01:03:39.730153 kubelet[2579]: E0304 01:03:39.727533 2579 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-66b64679fb-6fgtf"
Mar 4 01:03:39.730587 kubelet[2579]: E0304 01:03:39.727630 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b64679fb-6fgtf_calico-system(9c0b93c3-34c4-4c8f-bfc1-54f9448d999f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b64679fb-6fgtf_calico-system(9c0b93c3-34c4-4c8f-bfc1-54f9448d999f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-66b64679fb-6fgtf" podUID="9c0b93c3-34c4-4c8f-bfc1-54f9448d999f"
Mar 4 01:03:39.761028 containerd[1481]: time="2026-03-04T01:03:39.760887440Z" level=info msg="CreateContainer within sandbox \"d1036741b0920687876496de76d43050789fe2d7f825cc8764dcedd3e6f998ae\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"002d9e4dbf1af72728376fabe53c76411ecb9ab3637120459f7661bc4fa58edd\""
Mar 4 01:03:39.766047 containerd[1481]: time="2026-03-04T01:03:39.765927497Z" level=info msg="StartContainer for \"002d9e4dbf1af72728376fabe53c76411ecb9ab3637120459f7661bc4fa58edd\""
Mar 4 01:03:39.781047 containerd[1481]: time="2026-03-04T01:03:39.780983018Z" level=error msg="Failed to destroy network for sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.782543 containerd[1481]: time="2026-03-04T01:03:39.782142970Z" level=error msg="encountered an error cleaning up failed sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.782543 containerd[1481]: time="2026-03-04T01:03:39.782222180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-87f89c9c8-fnddc,Uid:ce96956a-db8a-428b-9f28-9b754235562e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.782950 kubelet[2579]: E0304 01:03:39.782825 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.783031 kubelet[2579]: E0304 01:03:39.782982 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-87f89c9c8-fnddc"
Mar 4 01:03:39.783092 kubelet[2579]: E0304 01:03:39.783020 2579 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-87f89c9c8-fnddc"
Mar 4 01:03:39.783478 kubelet[2579]: E0304 01:03:39.783116 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-87f89c9c8-fnddc_calico-system(ce96956a-db8a-428b-9f28-9b754235562e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-87f89c9c8-fnddc_calico-system(ce96956a-db8a-428b-9f28-9b754235562e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-87f89c9c8-fnddc" podUID="ce96956a-db8a-428b-9f28-9b754235562e"
Mar 4 01:03:39.830574 containerd[1481]: time="2026-03-04T01:03:39.830162555Z" level=error msg="Failed to destroy network for sandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.838593 containerd[1481]: time="2026-03-04T01:03:39.838033061Z" level=error msg="encountered an error cleaning up failed sandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.838593 containerd[1481]: time="2026-03-04T01:03:39.838242304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-whcgp,Uid:883fc4b1-6269-44ab-9fb7-38da3bb836eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.838953 kubelet[2579]: E0304 01:03:39.838796 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.838953 kubelet[2579]: E0304 01:03:39.838900 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-whcgp"
Mar 4 01:03:39.838953 kubelet[2579]: E0304 01:03:39.838943 2579 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-whcgp"
Mar 4 01:03:39.839188 kubelet[2579]: E0304 01:03:39.839030 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-whcgp_calico-system(883fc4b1-6269-44ab-9fb7-38da3bb836eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-whcgp_calico-system(883fc4b1-6269-44ab-9fb7-38da3bb836eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-whcgp" podUID="883fc4b1-6269-44ab-9fb7-38da3bb836eb"
Mar 4 01:03:39.847743 containerd[1481]: time="2026-03-04T01:03:39.847588367Z" level=error msg="Failed to destroy network for sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.850319 containerd[1481]: time="2026-03-04T01:03:39.848657159Z" level=error msg="encountered an error cleaning up failed sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.850319 containerd[1481]: time="2026-03-04T01:03:39.848761315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-svjm8,Uid:0199c151-24ad-4cf2-ae91-b7e9b350322f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.850628 kubelet[2579]: E0304 01:03:39.849303 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.851866 kubelet[2579]: E0304 01:03:39.851662 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-svjm8"
Mar 4 01:03:39.851866 kubelet[2579]: E0304 01:03:39.851823 2579 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-svjm8"
Mar 4 01:03:39.852727 kubelet[2579]: E0304 01:03:39.852543 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-svjm8_kube-system(0199c151-24ad-4cf2-ae91-b7e9b350322f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-svjm8_kube-system(0199c151-24ad-4cf2-ae91-b7e9b350322f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-svjm8" podUID="0199c151-24ad-4cf2-ae91-b7e9b350322f"
Mar 4 01:03:39.898283 containerd[1481]: time="2026-03-04T01:03:39.897800493Z" level=error msg="Failed to destroy network for sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.904981 containerd[1481]: time="2026-03-04T01:03:39.904857154Z" level=error msg="StopPodSandbox for \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\" failed" error="failed to destroy network for sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.908576 containerd[1481]: time="2026-03-04T01:03:39.906211320Z" level=error msg="encountered an error cleaning up failed sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.908790 containerd[1481]: time="2026-03-04T01:03:39.908664307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b64679fb-hzv4z,Uid:55e105c4-cb80-455a-abe4-3f8ab66ac4c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.910556 kubelet[2579]: E0304 01:03:39.910096 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:03:39.911087 kubelet[2579]: E0304 01:03:39.910087 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.911562 kubelet[2579]: E0304 01:03:39.911530 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-66b64679fb-hzv4z"
Mar 4 01:03:39.911746 kubelet[2579]: E0304 01:03:39.911720 2579 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-66b64679fb-hzv4z"
Mar 4 01:03:39.912165 kubelet[2579]: E0304 01:03:39.910914 2579 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"}
Mar 4 01:03:39.912715 kubelet[2579]: E0304 01:03:39.911980 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b64679fb-hzv4z_calico-system(55e105c4-cb80-455a-abe4-3f8ab66ac4c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b64679fb-hzv4z_calico-system(55e105c4-cb80-455a-abe4-3f8ab66ac4c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-66b64679fb-hzv4z" podUID="55e105c4-cb80-455a-abe4-3f8ab66ac4c8"
Mar 4 01:03:39.912715 kubelet[2579]: E0304 01:03:39.912661 2579 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28010a71-1727-4efe-b343-de6e69fbd281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Mar 4 01:03:39.912991 kubelet[2579]: E0304 01:03:39.912706 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28010a71-1727-4efe-b343-de6e69fbd281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dbf4f54f5-flgh7" podUID="28010a71-1727-4efe-b343-de6e69fbd281"
Mar 4 01:03:39.913909 containerd[1481]: time="2026-03-04T01:03:39.913741579Z" level=error msg="StopPodSandbox for \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\" failed" error="failed to destroy network for sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.920454 kubelet[2579]: E0304 01:03:39.919792 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80"
Mar 4 01:03:39.920454 kubelet[2579]: E0304 01:03:39.919974 2579 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80"}
Mar 4 01:03:39.920454 kubelet[2579]: E0304 01:03:39.920047 2579 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24812854-f0ac-4651-986c-4d61a0df5440\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Mar 4 01:03:39.920454 kubelet[2579]: E0304 01:03:39.920096 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24812854-f0ac-4651-986c-4d61a0df5440\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rs6f4" podUID="24812854-f0ac-4651-986c-4d61a0df5440"
Mar 4 01:03:39.925557 containerd[1481]: time="2026-03-04T01:03:39.925290979Z" level=error msg="Failed to destroy network for sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.926528 containerd[1481]: time="2026-03-04T01:03:39.926283419Z" level=error msg="encountered an error cleaning up failed sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.926789 containerd[1481]: time="2026-03-04T01:03:39.926588250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rr9gl,Uid:ba3a3117-2ed7-420f-9281-01467babd9c7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.927791 kubelet[2579]: E0304 01:03:39.927752 2579 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 4 01:03:39.928817 kubelet[2579]: E0304 01:03:39.928554 2579 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc
error: code = Unknown desc = failed to setup network for sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rr9gl" Mar 4 01:03:39.928817 kubelet[2579]: E0304 01:03:39.928591 2579 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rr9gl" Mar 4 01:03:39.928817 kubelet[2579]: E0304 01:03:39.928751 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rr9gl_kube-system(ba3a3117-2ed7-420f-9281-01467babd9c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rr9gl_kube-system(ba3a3117-2ed7-420f-9281-01467babd9c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rr9gl" podUID="ba3a3117-2ed7-420f-9281-01467babd9c7" Mar 4 01:03:39.946282 systemd[1]: Started cri-containerd-002d9e4dbf1af72728376fabe53c76411ecb9ab3637120459f7661bc4fa58edd.scope - libcontainer container 002d9e4dbf1af72728376fabe53c76411ecb9ab3637120459f7661bc4fa58edd. 
Mar 4 01:03:40.032881 containerd[1481]: time="2026-03-04T01:03:40.032729730Z" level=info msg="StartContainer for \"002d9e4dbf1af72728376fabe53c76411ecb9ab3637120459f7661bc4fa58edd\" returns successfully" Mar 4 01:03:40.620661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385674591.mount: Deactivated successfully. Mar 4 01:03:40.620878 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752-shm.mount: Deactivated successfully. Mar 4 01:03:40.620994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4-shm.mount: Deactivated successfully. Mar 4 01:03:40.621112 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0-shm.mount: Deactivated successfully. Mar 4 01:03:40.621236 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93-shm.mount: Deactivated successfully. Mar 4 01:03:40.621612 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22-shm.mount: Deactivated successfully. 
Mar 4 01:03:40.653327 kubelet[2579]: I0304 01:03:40.652340 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Mar 4 01:03:40.657647 containerd[1481]: time="2026-03-04T01:03:40.655527715Z" level=info msg="StopPodSandbox for \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\"" Mar 4 01:03:40.657647 containerd[1481]: time="2026-03-04T01:03:40.656036309Z" level=info msg="Ensure that sandbox 98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb in task-service has been cleanup successfully" Mar 4 01:03:40.713535 kubelet[2579]: I0304 01:03:40.698974 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Mar 4 01:03:40.725152 containerd[1481]: time="2026-03-04T01:03:40.724495754Z" level=info msg="StopPodSandbox for \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\"" Mar 4 01:03:40.725152 containerd[1481]: time="2026-03-04T01:03:40.724812288Z" level=info msg="Ensure that sandbox 01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0 in task-service has been cleanup successfully" Mar 4 01:03:40.747594 kubelet[2579]: I0304 01:03:40.747093 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" Mar 4 01:03:40.753573 containerd[1481]: time="2026-03-04T01:03:40.752001168Z" level=info msg="StopPodSandbox for \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\"" Mar 4 01:03:40.753573 containerd[1481]: time="2026-03-04T01:03:40.752235858Z" level=info msg="Ensure that sandbox 549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22 in task-service has been cleanup successfully" Mar 4 01:03:40.770536 kubelet[2579]: I0304 01:03:40.770209 2579 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:03:40.780464 containerd[1481]: time="2026-03-04T01:03:40.778693953Z" level=info msg="StopPodSandbox for \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\"" Mar 4 01:03:40.780464 containerd[1481]: time="2026-03-04T01:03:40.778980530Z" level=info msg="Ensure that sandbox e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752 in task-service has been cleanup successfully" Mar 4 01:03:40.797060 kubelet[2579]: I0304 01:03:40.796883 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:03:40.802085 containerd[1481]: time="2026-03-04T01:03:40.800143459Z" level=info msg="StopPodSandbox for \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\"" Mar 4 01:03:40.802085 containerd[1481]: time="2026-03-04T01:03:40.800821329Z" level=info msg="Ensure that sandbox 793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4 in task-service has been cleanup successfully" Mar 4 01:03:40.875001 kubelet[2579]: I0304 01:03:40.874721 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-44r5c" podStartSLOduration=5.515834592 podStartE2EDuration="49.87468489s" podCreationTimestamp="2026-03-04 01:02:51 +0000 UTC" firstStartedPulling="2026-03-04 01:02:51.950095637 +0000 UTC m=+37.922157429" lastFinishedPulling="2026-03-04 01:03:36.308945935 +0000 UTC m=+82.281007727" observedRunningTime="2026-03-04 01:03:40.840548872 +0000 UTC m=+86.812610764" watchObservedRunningTime="2026-03-04 01:03:40.87468489 +0000 UTC m=+86.846746681" Mar 4 01:03:40.931000 containerd[1481]: time="2026-03-04T01:03:40.929489623Z" level=error msg="StopPodSandbox for \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\" failed" error="failed to destroy network for sandbox 
\"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:03:40.931159 kubelet[2579]: E0304 01:03:40.929963 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Mar 4 01:03:40.931159 kubelet[2579]: E0304 01:03:40.930022 2579 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0"} Mar 4 01:03:40.931159 kubelet[2579]: E0304 01:03:40.930070 2579 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"883fc4b1-6269-44ab-9fb7-38da3bb836eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:03:40.931159 kubelet[2579]: E0304 01:03:40.930109 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"883fc4b1-6269-44ab-9fb7-38da3bb836eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-whcgp" podUID="883fc4b1-6269-44ab-9fb7-38da3bb836eb" Mar 4 01:03:40.934583 kubelet[2579]: I0304 01:03:40.933947 2579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Mar 4 01:03:40.942827 containerd[1481]: time="2026-03-04T01:03:40.936205367Z" level=info msg="StopPodSandbox for \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\"" Mar 4 01:03:40.942827 containerd[1481]: time="2026-03-04T01:03:40.940602224Z" level=info msg="Ensure that sandbox 1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93 in task-service has been cleanup successfully" Mar 4 01:03:41.016542 containerd[1481]: time="2026-03-04T01:03:41.014902111Z" level=error msg="StopPodSandbox for \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\" failed" error="failed to destroy network for sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:03:41.016765 kubelet[2579]: E0304 01:03:41.015858 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Mar 4 01:03:41.016765 kubelet[2579]: E0304 01:03:41.015919 2579 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb"} Mar 4 01:03:41.016765 kubelet[2579]: E0304 01:03:41.015970 2579 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c0b93c3-34c4-4c8f-bfc1-54f9448d999f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:03:41.016765 kubelet[2579]: E0304 01:03:41.016015 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c0b93c3-34c4-4c8f-bfc1-54f9448d999f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-66b64679fb-6fgtf" podUID="9c0b93c3-34c4-4c8f-bfc1-54f9448d999f" Mar 4 01:03:41.157019 containerd[1481]: time="2026-03-04T01:03:41.156801291Z" level=error msg="StopPodSandbox for \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\" failed" error="failed to destroy network for sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:03:41.158193 kubelet[2579]: E0304 01:03:41.157176 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Mar 4 01:03:41.158193 kubelet[2579]: E0304 01:03:41.157245 2579 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93"} Mar 4 01:03:41.158193 kubelet[2579]: E0304 01:03:41.157279 2579 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce96956a-db8a-428b-9f28-9b754235562e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:03:41.158193 kubelet[2579]: E0304 01:03:41.157311 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce96956a-db8a-428b-9f28-9b754235562e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-87f89c9c8-fnddc" podUID="ce96956a-db8a-428b-9f28-9b754235562e" Mar 4 01:03:41.166270 containerd[1481]: time="2026-03-04T01:03:41.166139797Z" level=error msg="StopPodSandbox for \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\" failed" error="failed to destroy network for 
sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:03:41.166964 containerd[1481]: time="2026-03-04T01:03:41.166871318Z" level=error msg="StopPodSandbox for \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\" failed" error="failed to destroy network for sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:03:41.167881 containerd[1481]: time="2026-03-04T01:03:41.167669984Z" level=error msg="StopPodSandbox for \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\" failed" error="failed to destroy network for sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:03:41.169017 kubelet[2579]: E0304 01:03:41.168906 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:03:41.169088 kubelet[2579]: E0304 01:03:41.169020 2579 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4"} Mar 4 
01:03:41.169088 kubelet[2579]: E0304 01:03:41.169057 2579 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba3a3117-2ed7-420f-9281-01467babd9c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:03:41.169219 kubelet[2579]: E0304 01:03:41.169056 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" Mar 4 01:03:41.169219 kubelet[2579]: E0304 01:03:41.169124 2579 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:03:41.169219 kubelet[2579]: E0304 01:03:41.169143 2579 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752"} Mar 4 01:03:41.169219 kubelet[2579]: E0304 01:03:41.169134 2579 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"} Mar 4 01:03:41.169219 kubelet[2579]: E0304 01:03:41.169158 2579 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55e105c4-cb80-455a-abe4-3f8ab66ac4c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:03:41.169831 kubelet[2579]: E0304 01:03:41.169175 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55e105c4-cb80-455a-abe4-3f8ab66ac4c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-66b64679fb-hzv4z" podUID="55e105c4-cb80-455a-abe4-3f8ab66ac4c8" Mar 4 01:03:41.169831 kubelet[2579]: E0304 01:03:41.169180 2579 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0199c151-24ad-4cf2-ae91-b7e9b350322f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:03:41.169831 kubelet[2579]: E0304 01:03:41.169221 2579 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"0199c151-24ad-4cf2-ae91-b7e9b350322f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-svjm8" podUID="0199c151-24ad-4cf2-ae91-b7e9b350322f" Mar 4 01:03:41.170216 kubelet[2579]: E0304 01:03:41.169091 2579 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba3a3117-2ed7-420f-9281-01467babd9c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rr9gl" podUID="ba3a3117-2ed7-420f-9281-01467babd9c7" Mar 4 01:03:41.948651 containerd[1481]: time="2026-03-04T01:03:41.947989573Z" level=info msg="StopPodSandbox for \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\"" Mar 4 01:03:42.077066 systemd[1]: run-containerd-runc-k8s.io-002d9e4dbf1af72728376fabe53c76411ecb9ab3637120459f7661bc4fa58edd-runc.489ShN.mount: Deactivated successfully. Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.321 [INFO][3993] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.324 [INFO][3993] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" iface="eth0" netns="/var/run/netns/cni-1e272fe1-1518-6079-9df2-efaae80a59a8" Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.326 [INFO][3993] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" iface="eth0" netns="/var/run/netns/cni-1e272fe1-1518-6079-9df2-efaae80a59a8" Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.329 [INFO][3993] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" iface="eth0" netns="/var/run/netns/cni-1e272fe1-1518-6079-9df2-efaae80a59a8" Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.329 [INFO][3993] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.329 [INFO][3993] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.378 [INFO][4023] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" HandleID="k8s-pod-network.1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Workload="localhost-k8s-whisker--87f89c9c8--fnddc-eth0" Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.380 [INFO][4023] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.380 [INFO][4023] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.415 [WARNING][4023] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" HandleID="k8s-pod-network.1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Workload="localhost-k8s-whisker--87f89c9c8--fnddc-eth0" Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.415 [INFO][4023] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" HandleID="k8s-pod-network.1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Workload="localhost-k8s-whisker--87f89c9c8--fnddc-eth0" Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.432 [INFO][4023] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:03:42.481227 containerd[1481]: 2026-03-04 01:03:42.472 [INFO][3993] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Mar 4 01:03:42.532433 systemd[1]: run-netns-cni\x2d1e272fe1\x2d1518\x2d6079\x2d9df2\x2defaae80a59a8.mount: Deactivated successfully. 
Mar 4 01:03:42.556229 containerd[1481]: time="2026-03-04T01:03:42.545722355Z" level=info msg="TearDown network for sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\" successfully" Mar 4 01:03:42.556229 containerd[1481]: time="2026-03-04T01:03:42.547750975Z" level=info msg="StopPodSandbox for \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\" returns successfully" Mar 4 01:03:42.849735 kubelet[2579]: I0304 01:03:42.848030 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce96956a-db8a-428b-9f28-9b754235562e-whisker-backend-key-pair\") pod \"ce96956a-db8a-428b-9f28-9b754235562e\" (UID: \"ce96956a-db8a-428b-9f28-9b754235562e\") " Mar 4 01:03:42.849735 kubelet[2579]: I0304 01:03:42.848099 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce96956a-db8a-428b-9f28-9b754235562e-whisker-ca-bundle\") pod \"ce96956a-db8a-428b-9f28-9b754235562e\" (UID: \"ce96956a-db8a-428b-9f28-9b754235562e\") " Mar 4 01:03:42.849735 kubelet[2579]: I0304 01:03:42.848271 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zgxb\" (UniqueName: \"kubernetes.io/projected/ce96956a-db8a-428b-9f28-9b754235562e-kube-api-access-7zgxb\") pod \"ce96956a-db8a-428b-9f28-9b754235562e\" (UID: \"ce96956a-db8a-428b-9f28-9b754235562e\") " Mar 4 01:03:42.849735 kubelet[2579]: I0304 01:03:42.848310 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ce96956a-db8a-428b-9f28-9b754235562e-nginx-config\") pod \"ce96956a-db8a-428b-9f28-9b754235562e\" (UID: \"ce96956a-db8a-428b-9f28-9b754235562e\") " Mar 4 01:03:42.852057 kubelet[2579]: I0304 01:03:42.851959 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ce96956a-db8a-428b-9f28-9b754235562e-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "ce96956a-db8a-428b-9f28-9b754235562e" (UID: "ce96956a-db8a-428b-9f28-9b754235562e"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:03:42.854418 kubelet[2579]: I0304 01:03:42.854218 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce96956a-db8a-428b-9f28-9b754235562e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ce96956a-db8a-428b-9f28-9b754235562e" (UID: "ce96956a-db8a-428b-9f28-9b754235562e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:03:42.867684 kubelet[2579]: I0304 01:03:42.867554 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce96956a-db8a-428b-9f28-9b754235562e-kube-api-access-7zgxb" (OuterVolumeSpecName: "kube-api-access-7zgxb") pod "ce96956a-db8a-428b-9f28-9b754235562e" (UID: "ce96956a-db8a-428b-9f28-9b754235562e"). InnerVolumeSpecName "kube-api-access-7zgxb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 01:03:42.871902 systemd[1]: var-lib-kubelet-pods-ce96956a\x2ddb8a\x2d428b\x2d9f28\x2d9b754235562e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7zgxb.mount: Deactivated successfully. Mar 4 01:03:42.876886 kubelet[2579]: I0304 01:03:42.876790 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce96956a-db8a-428b-9f28-9b754235562e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ce96956a-db8a-428b-9f28-9b754235562e" (UID: "ce96956a-db8a-428b-9f28-9b754235562e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 4 01:03:42.880741 systemd[1]: var-lib-kubelet-pods-ce96956a\x2ddb8a\x2d428b\x2d9f28\x2d9b754235562e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 4 01:03:42.949054 kubelet[2579]: I0304 01:03:42.948933 2579 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ce96956a-db8a-428b-9f28-9b754235562e-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 4 01:03:42.949249 kubelet[2579]: I0304 01:03:42.949029 2579 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce96956a-db8a-428b-9f28-9b754235562e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 4 01:03:42.949249 kubelet[2579]: I0304 01:03:42.949110 2579 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce96956a-db8a-428b-9f28-9b754235562e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 4 01:03:42.949249 kubelet[2579]: I0304 01:03:42.949126 2579 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7zgxb\" (UniqueName: \"kubernetes.io/projected/ce96956a-db8a-428b-9f28-9b754235562e-kube-api-access-7zgxb\") on node \"localhost\" DevicePath \"\"" Mar 4 01:03:42.993000 systemd[1]: Removed slice kubepods-besteffort-podce96956a_db8a_428b_9f28_9b754235562e.slice - libcontainer container kubepods-besteffort-podce96956a_db8a_428b_9f28_9b754235562e.slice. Mar 4 01:03:43.241981 systemd[1]: Created slice kubepods-besteffort-pod26e0b3a6_31a9_4a63_b390_d6b035f8c176.slice - libcontainer container kubepods-besteffort-pod26e0b3a6_31a9_4a63_b390_d6b035f8c176.slice. 
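The systemd mount units above (e.g. `var-lib-kubelet-pods-ce96956a\x2ddb8a\x2d...-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7zgxb.mount`) are the kubelet volume paths run through systemd's unit-name escaping. A minimal sketch of that escaping (an approximation of the rules in systemd-escape(1), not the reference implementation; the function name is mine) reproduces the unit names in these journal lines:

```python
# Hedged sketch of systemd path escaping: strip leading/trailing "/",
# map "/" to "-", keep [a-zA-Z0-9:_.], and hex-escape everything else
# (so "-" becomes \x2d and "~" becomes \x7e, as seen in the log above).
def systemd_escape_path(path: str) -> str:
    path = path.strip("/")
    out = []
    for ch in path:
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(systemd_escape_path(
    "/var/lib/kubelet/pods/ce96956a-db8a-428b-9f28-9b754235562e"
    "/volumes/kubernetes.io~projected/kube-api-access-7zgxb"))
```

Appending `.mount` to the result gives exactly the unit name systemd reports as deactivated.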
Mar 4 01:03:43.252520 kubelet[2579]: I0304 01:03:43.252241 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zkrz\" (UniqueName: \"kubernetes.io/projected/26e0b3a6-31a9-4a63-b390-d6b035f8c176-kube-api-access-2zkrz\") pod \"whisker-6b5d885999-hlpns\" (UID: \"26e0b3a6-31a9-4a63-b390-d6b035f8c176\") " pod="calico-system/whisker-6b5d885999-hlpns" Mar 4 01:03:43.252520 kubelet[2579]: I0304 01:03:43.252298 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26e0b3a6-31a9-4a63-b390-d6b035f8c176-whisker-ca-bundle\") pod \"whisker-6b5d885999-hlpns\" (UID: \"26e0b3a6-31a9-4a63-b390-d6b035f8c176\") " pod="calico-system/whisker-6b5d885999-hlpns" Mar 4 01:03:43.252520 kubelet[2579]: I0304 01:03:43.252326 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/26e0b3a6-31a9-4a63-b390-d6b035f8c176-whisker-backend-key-pair\") pod \"whisker-6b5d885999-hlpns\" (UID: \"26e0b3a6-31a9-4a63-b390-d6b035f8c176\") " pod="calico-system/whisker-6b5d885999-hlpns" Mar 4 01:03:43.252520 kubelet[2579]: I0304 01:03:43.252643 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/26e0b3a6-31a9-4a63-b390-d6b035f8c176-nginx-config\") pod \"whisker-6b5d885999-hlpns\" (UID: \"26e0b3a6-31a9-4a63-b390-d6b035f8c176\") " pod="calico-system/whisker-6b5d885999-hlpns" Mar 4 01:03:43.566668 containerd[1481]: time="2026-03-04T01:03:43.564871536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b5d885999-hlpns,Uid:26e0b3a6-31a9-4a63-b390-d6b035f8c176,Namespace:calico-system,Attempt:0,}" Mar 4 01:03:43.671924 kernel: calico-node[4052]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 4 
01:03:44.071616 systemd-networkd[1397]: calicc4a66b415b: Link UP Mar 4 01:03:44.072288 systemd-networkd[1397]: calicc4a66b415b: Gained carrier Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.727 [INFO][4172] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6b5d885999--hlpns-eth0 whisker-6b5d885999- calico-system 26e0b3a6-31a9-4a63-b390-d6b035f8c176 1081 0 2026-03-04 01:03:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b5d885999 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6b5d885999-hlpns eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicc4a66b415b [] [] }} ContainerID="ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" Namespace="calico-system" Pod="whisker-6b5d885999-hlpns" WorkloadEndpoint="localhost-k8s-whisker--6b5d885999--hlpns-" Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.730 [INFO][4172] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" Namespace="calico-system" Pod="whisker-6b5d885999-hlpns" WorkloadEndpoint="localhost-k8s-whisker--6b5d885999--hlpns-eth0" Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.838 [INFO][4191] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" HandleID="k8s-pod-network.ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" Workload="localhost-k8s-whisker--6b5d885999--hlpns-eth0" Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.860 [INFO][4191] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" 
HandleID="k8s-pod-network.ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" Workload="localhost-k8s-whisker--6b5d885999--hlpns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038be20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6b5d885999-hlpns", "timestamp":"2026-03-04 01:03:43.838922888 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002cec60)} Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.860 [INFO][4191] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.860 [INFO][4191] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.860 [INFO][4191] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.874 [INFO][4191] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" host="localhost" Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.909 [INFO][4191] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.941 [INFO][4191] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.951 [INFO][4191] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.961 [INFO][4191] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:03:44.180799 
containerd[1481]: 2026-03-04 01:03:43.961 [INFO][4191] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" host="localhost" Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.970 [INFO][4191] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1 Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:43.985 [INFO][4191] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" host="localhost" Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:44.017 [INFO][4191] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" host="localhost" Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:44.017 [INFO][4191] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" host="localhost" Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:44.017 [INFO][4191] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
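The IPAM sequence above shows the host `localhost` confirming an affinity for block `192.168.88.128/26` and the first assignment coming back as `192.168.88.129/26`. Treating the block as an ordered pool that skips the network address explains why `.129` is first (a simplified model, not Calico's actual allocator; `first_free` is a hypothetical helper):

```python
# Sketch of first-fit assignment from a CIDR block, as in the IPAM log above.
import ipaddress

def first_free(block: str, allocated: set) -> str:
    """Return the first unallocated host address in a CIDR block."""
    net = ipaddress.ip_network(block)
    for ip in net.hosts():  # .hosts() skips the network and broadcast addresses
        if str(ip) not in allocated:
            return str(ip)
    raise RuntimeError(f"block {block} exhausted")

print(first_free("192.168.88.128/26", set()))  # 192.168.88.129
```

The same model predicts the next pod in this log landing on `192.168.88.130`, which is what the later `calico-apiserver-66b64679fb-hzv4z` assignment reports.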
Mar 4 01:03:44.180799 containerd[1481]: 2026-03-04 01:03:44.018 [INFO][4191] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" HandleID="k8s-pod-network.ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" Workload="localhost-k8s-whisker--6b5d885999--hlpns-eth0" Mar 4 01:03:44.181881 containerd[1481]: 2026-03-04 01:03:44.027 [INFO][4172] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" Namespace="calico-system" Pod="whisker-6b5d885999-hlpns" WorkloadEndpoint="localhost-k8s-whisker--6b5d885999--hlpns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6b5d885999--hlpns-eth0", GenerateName:"whisker-6b5d885999-", Namespace:"calico-system", SelfLink:"", UID:"26e0b3a6-31a9-4a63-b390-d6b035f8c176", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b5d885999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6b5d885999-hlpns", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicc4a66b415b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:03:44.181881 containerd[1481]: 2026-03-04 01:03:44.027 [INFO][4172] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" Namespace="calico-system" Pod="whisker-6b5d885999-hlpns" WorkloadEndpoint="localhost-k8s-whisker--6b5d885999--hlpns-eth0" Mar 4 01:03:44.181881 containerd[1481]: 2026-03-04 01:03:44.028 [INFO][4172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc4a66b415b ContainerID="ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" Namespace="calico-system" Pod="whisker-6b5d885999-hlpns" WorkloadEndpoint="localhost-k8s-whisker--6b5d885999--hlpns-eth0" Mar 4 01:03:44.181881 containerd[1481]: 2026-03-04 01:03:44.080 [INFO][4172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" Namespace="calico-system" Pod="whisker-6b5d885999-hlpns" WorkloadEndpoint="localhost-k8s-whisker--6b5d885999--hlpns-eth0" Mar 4 01:03:44.181881 containerd[1481]: 2026-03-04 01:03:44.083 [INFO][4172] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" Namespace="calico-system" Pod="whisker-6b5d885999-hlpns" WorkloadEndpoint="localhost-k8s-whisker--6b5d885999--hlpns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6b5d885999--hlpns-eth0", GenerateName:"whisker-6b5d885999-", Namespace:"calico-system", SelfLink:"", UID:"26e0b3a6-31a9-4a63-b390-d6b035f8c176", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 3, 43, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b5d885999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1", Pod:"whisker-6b5d885999-hlpns", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicc4a66b415b", MAC:"56:0c:05:f8:1c:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:03:44.181881 containerd[1481]: 2026-03-04 01:03:44.149 [INFO][4172] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1" Namespace="calico-system" Pod="whisker-6b5d885999-hlpns" WorkloadEndpoint="localhost-k8s-whisker--6b5d885999--hlpns-eth0" Mar 4 01:03:44.333614 containerd[1481]: time="2026-03-04T01:03:44.329159529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:03:44.333614 containerd[1481]: time="2026-03-04T01:03:44.329306895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:03:44.333614 containerd[1481]: time="2026-03-04T01:03:44.329331591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:44.333614 containerd[1481]: time="2026-03-04T01:03:44.329603821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:44.417717 systemd[1]: Started cri-containerd-ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1.scope - libcontainer container ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1. Mar 4 01:03:44.484324 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:03:44.546829 containerd[1481]: time="2026-03-04T01:03:44.546525806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b5d885999-hlpns,Uid:26e0b3a6-31a9-4a63-b390-d6b035f8c176,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1\"" Mar 4 01:03:44.551149 containerd[1481]: time="2026-03-04T01:03:44.551041123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 4 01:03:44.625850 kubelet[2579]: I0304 01:03:44.625298 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce96956a-db8a-428b-9f28-9b754235562e" path="/var/lib/kubelet/pods/ce96956a-db8a-428b-9f28-9b754235562e/volumes" Mar 4 01:03:44.914323 systemd-networkd[1397]: vxlan.calico: Link UP Mar 4 01:03:44.914340 systemd-networkd[1397]: vxlan.calico: Gained carrier Mar 4 01:03:45.421690 containerd[1481]: time="2026-03-04T01:03:45.421600401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:45.425203 containerd[1481]: time="2026-03-04T01:03:45.424789119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 4 01:03:45.429648 containerd[1481]: time="2026-03-04T01:03:45.427715704Z" level=info 
msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:45.435304 containerd[1481]: time="2026-03-04T01:03:45.433760939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:45.435304 containerd[1481]: time="2026-03-04T01:03:45.435120429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 883.983527ms" Mar 4 01:03:45.435304 containerd[1481]: time="2026-03-04T01:03:45.435236848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 4 01:03:45.457985 containerd[1481]: time="2026-03-04T01:03:45.457733363Z" level=info msg="CreateContainer within sandbox \"ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 4 01:03:45.544120 systemd-networkd[1397]: calicc4a66b415b: Gained IPv6LL Mar 4 01:03:45.550616 containerd[1481]: time="2026-03-04T01:03:45.547610319Z" level=info msg="CreateContainer within sandbox \"ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"17cbc2e5db6806bd29a5768c7b98feba175ffa0642e57fe36fd0b894466c458b\"" Mar 4 01:03:45.552815 containerd[1481]: time="2026-03-04T01:03:45.552654997Z" level=info msg="StartContainer for \"17cbc2e5db6806bd29a5768c7b98feba175ffa0642e57fe36fd0b894466c458b\"" Mar 4 
01:03:45.685000 systemd[1]: run-containerd-runc-k8s.io-17cbc2e5db6806bd29a5768c7b98feba175ffa0642e57fe36fd0b894466c458b-runc.C4CkgK.mount: Deactivated successfully. Mar 4 01:03:45.714855 systemd[1]: Started cri-containerd-17cbc2e5db6806bd29a5768c7b98feba175ffa0642e57fe36fd0b894466c458b.scope - libcontainer container 17cbc2e5db6806bd29a5768c7b98feba175ffa0642e57fe36fd0b894466c458b. Mar 4 01:03:45.834876 containerd[1481]: time="2026-03-04T01:03:45.834814728Z" level=info msg="StartContainer for \"17cbc2e5db6806bd29a5768c7b98feba175ffa0642e57fe36fd0b894466c458b\" returns successfully" Mar 4 01:03:45.841416 containerd[1481]: time="2026-03-04T01:03:45.841045982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 4 01:03:46.623313 kubelet[2579]: E0304 01:03:46.623045 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:46.624854 kubelet[2579]: E0304 01:03:46.624118 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:46.900838 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL Mar 4 01:03:47.829199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335722136.mount: Deactivated successfully. 
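The repeated `Nameserver limits exceeded` errors above come from the kubelet capping pod resolv.conf files at three nameservers (the classic glibc MAXNS limit); it keeps the first three and logs the applied line. A hedged sketch of that truncation (the real logic lives in kubelet's dns.go; names here are mine):

```python
# Keep at most three nameservers, as the kubelet does when building a pod's
# resolv.conf; the log above shows "1.1.1.1 1.0.0.1 8.8.8.8" surviving.
MAX_NAMESERVERS = 3  # glibc resolver MAXNS

def applied_nameservers(servers: list) -> list:
    kept = servers[:MAX_NAMESERVERS]
    if len(servers) > MAX_NAMESERVERS:
        print("Nameserver limits exceeded, applied line:", " ".join(kept))
    return kept

applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])
```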
Mar 4 01:03:47.879806 containerd[1481]: time="2026-03-04T01:03:47.879716730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:47.883249 containerd[1481]: time="2026-03-04T01:03:47.883066015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 4 01:03:47.886768 containerd[1481]: time="2026-03-04T01:03:47.886070323Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:47.896468 containerd[1481]: time="2026-03-04T01:03:47.896081373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:47.897152 containerd[1481]: time="2026-03-04T01:03:47.897062056Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.055769382s" Mar 4 01:03:47.897152 containerd[1481]: time="2026-03-04T01:03:47.897108052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 4 01:03:47.908288 containerd[1481]: time="2026-03-04T01:03:47.908120500Z" level=info msg="CreateContainer within sandbox \"ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 4 01:03:48.004163 
containerd[1481]: time="2026-03-04T01:03:48.003581563Z" level=info msg="CreateContainer within sandbox \"ba84267a73730f9212426e8954eaa4f0a402720afaeba2882e843783711d52f1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"890a27521371f2f5bda54ff9ba8babc70bf75ccb6eefc4bb5096d474b929568f\"" Mar 4 01:03:48.010725 containerd[1481]: time="2026-03-04T01:03:48.010458652Z" level=info msg="StartContainer for \"890a27521371f2f5bda54ff9ba8babc70bf75ccb6eefc4bb5096d474b929568f\"" Mar 4 01:03:48.166453 systemd[1]: Started cri-containerd-890a27521371f2f5bda54ff9ba8babc70bf75ccb6eefc4bb5096d474b929568f.scope - libcontainer container 890a27521371f2f5bda54ff9ba8babc70bf75ccb6eefc4bb5096d474b929568f. Mar 4 01:03:48.313505 containerd[1481]: time="2026-03-04T01:03:48.312310717Z" level=info msg="StartContainer for \"890a27521371f2f5bda54ff9ba8babc70bf75ccb6eefc4bb5096d474b929568f\" returns successfully" Mar 4 01:03:48.623148 kubelet[2579]: E0304 01:03:48.622935 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:49.239141 kubelet[2579]: I0304 01:03:49.238506 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6b5d885999-hlpns" podStartSLOduration=2.890200848 podStartE2EDuration="6.238483536s" podCreationTimestamp="2026-03-04 01:03:43 +0000 UTC" firstStartedPulling="2026-03-04 01:03:44.550561527 +0000 UTC m=+90.522623319" lastFinishedPulling="2026-03-04 01:03:47.898844214 +0000 UTC m=+93.870906007" observedRunningTime="2026-03-04 01:03:49.181790875 +0000 UTC m=+95.153852696" watchObservedRunningTime="2026-03-04 01:03:49.238483536 +0000 UTC m=+95.210545328" Mar 4 01:03:50.869858 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:40760.service - OpenSSH per-connection server daemon (10.0.0.1:40760). 
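The `pod_startup_latency_tracker` entry above reports two durations for `whisker-6b5d885999-hlpns`, and both can be recomputed from the timestamps in the same log line: `podStartE2EDuration` is `observedRunningTime` minus `podCreationTimestamp`, and `podStartSLOduration` is that interval minus the time spent pulling images (`lastFinishedPulling` minus `firstStartedPulling`). A sketch of the arithmetic (truncating the log's nanoseconds to the microseconds `datetime` supports):

```python
# Recompute the kubelet's startup durations from the logged timestamps.
from datetime import datetime

def parse(ts: str) -> datetime:
    # klog timestamps carry nanoseconds; datetime holds only microseconds,
    # so truncate the fractional part to six digits before parsing.
    date_part, frac = ts.split(".")
    return datetime.strptime(f"{date_part}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")

created    = parse("2026-03-04 01:03:43.000000000")  # podCreationTimestamp
first_pull = parse("2026-03-04 01:03:44.550561527")  # firstStartedPulling
last_pull  = parse("2026-03-04 01:03:47.898844214")  # lastFinishedPulling
observed   = parse("2026-03-04 01:03:49.238483536")  # observedRunningTime

e2e = (observed - created).total_seconds()            # podStartE2EDuration
slo = e2e - (last_pull - first_pull).total_seconds()  # podStartSLOduration
print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
```

To microsecond precision this matches the logged `podStartE2EDuration="6.238483536s"` and `podStartSLOduration=2.890200848`.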
Mar 4 01:03:51.114306 sshd[4449]: Accepted publickey for core from 10.0.0.1 port 40760 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:03:51.144435 sshd[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:03:51.274250 systemd-logind[1450]: New session 8 of user core. Mar 4 01:03:51.286898 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 4 01:03:51.632539 containerd[1481]: time="2026-03-04T01:03:51.632000305Z" level=info msg="StopPodSandbox for \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\"" Mar 4 01:03:52.707691 sshd[4449]: pam_unix(sshd:session): session closed for user core Mar 4 01:03:52.732184 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:40760.service: Deactivated successfully. Mar 4 01:03:52.747976 systemd[1]: session-8.scope: Deactivated successfully. Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.032 [INFO][4482] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.039 [INFO][4482] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" iface="eth0" netns="/var/run/netns/cni-95e5c780-3097-8b52-200e-a38629e09551" Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.059 [INFO][4482] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" iface="eth0" netns="/var/run/netns/cni-95e5c780-3097-8b52-200e-a38629e09551" Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.076 [INFO][4482] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" iface="eth0" netns="/var/run/netns/cni-95e5c780-3097-8b52-200e-a38629e09551" Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.078 [INFO][4482] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.078 [INFO][4482] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.667 [INFO][4492] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" HandleID="k8s-pod-network.e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.667 [INFO][4492] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.667 [INFO][4492] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.707 [WARNING][4492] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" HandleID="k8s-pod-network.e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.708 [INFO][4492] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" HandleID="k8s-pod-network.e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.727 [INFO][4492] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:03:52.756186 containerd[1481]: 2026-03-04 01:03:52.734 [INFO][4482] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:03:52.756186 containerd[1481]: time="2026-03-04T01:03:52.755849506Z" level=info msg="TearDown network for sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\" successfully" Mar 4 01:03:52.756186 containerd[1481]: time="2026-03-04T01:03:52.755901624Z" level=info msg="StopPodSandbox for \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\" returns successfully" Mar 4 01:03:52.758158 systemd[1]: run-netns-cni\x2d95e5c780\x2d3097\x2d8b52\x2d200e\x2da38629e09551.mount: Deactivated successfully. Mar 4 01:03:52.763000 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Mar 4 01:03:52.767664 systemd-logind[1450]: Removed session 8. 
Mar 4 01:03:52.780792 containerd[1481]: time="2026-03-04T01:03:52.780037263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b64679fb-hzv4z,Uid:55e105c4-cb80-455a-abe4-3f8ab66ac4c8,Namespace:calico-system,Attempt:1,}" Mar 4 01:03:53.282698 systemd-networkd[1397]: cali80bc3f7dafa: Link UP Mar 4 01:03:53.284198 systemd-networkd[1397]: cali80bc3f7dafa: Gained carrier Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.001 [INFO][4508] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0 calico-apiserver-66b64679fb- calico-system 55e105c4-cb80-455a-abe4-3f8ab66ac4c8 1170 0 2026-03-04 01:02:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66b64679fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66b64679fb-hzv4z eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali80bc3f7dafa [] [] }} ContainerID="681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-hzv4z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-" Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.001 [INFO][4508] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-hzv4z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.113 [INFO][4519] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" 
HandleID="k8s-pod-network.681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.139 [INFO][4519] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" HandleID="k8s-pod-network.681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139130), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-66b64679fb-hzv4z", "timestamp":"2026-03-04 01:03:53.113550694 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000460420)} Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.140 [INFO][4519] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.140 [INFO][4519] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.140 [INFO][4519] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.153 [INFO][4519] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" host="localhost"
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.174 [INFO][4519] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.201 [INFO][4519] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.209 [INFO][4519] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.217 [INFO][4519] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.218 [INFO][4519] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" host="localhost"
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.229 [INFO][4519] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.242 [INFO][4519] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" host="localhost"
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.264 [INFO][4519] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" host="localhost"
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.264 [INFO][4519] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" host="localhost"
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.264 [INFO][4519] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:03:53.316460 containerd[1481]: 2026-03-04 01:03:53.264 [INFO][4519] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" HandleID="k8s-pod-network.681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0"
Mar 4 01:03:53.317761 containerd[1481]: 2026-03-04 01:03:53.272 [INFO][4508] cni-plugin/k8s.go 418: Populated endpoint ContainerID="681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-hzv4z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0", GenerateName:"calico-apiserver-66b64679fb-", Namespace:"calico-system", SelfLink:"", UID:"55e105c4-cb80-455a-abe4-3f8ab66ac4c8", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b64679fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66b64679fb-hzv4z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali80bc3f7dafa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:03:53.317761 containerd[1481]: 2026-03-04 01:03:53.273 [INFO][4508] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-hzv4z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0"
Mar 4 01:03:53.317761 containerd[1481]: 2026-03-04 01:03:53.273 [INFO][4508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80bc3f7dafa ContainerID="681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-hzv4z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0"
Mar 4 01:03:53.317761 containerd[1481]: 2026-03-04 01:03:53.283 [INFO][4508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-hzv4z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0"
Mar 4 01:03:53.317761 containerd[1481]: 2026-03-04 01:03:53.286 [INFO][4508] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-hzv4z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0", GenerateName:"calico-apiserver-66b64679fb-", Namespace:"calico-system", SelfLink:"", UID:"55e105c4-cb80-455a-abe4-3f8ab66ac4c8", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b64679fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34", Pod:"calico-apiserver-66b64679fb-hzv4z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali80bc3f7dafa", MAC:"96:13:89:84:79:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:03:53.317761 containerd[1481]: 2026-03-04 01:03:53.306 [INFO][4508] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-hzv4z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0"
Mar 4 01:03:53.371826 containerd[1481]: time="2026-03-04T01:03:53.371503073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:03:53.373776 containerd[1481]: time="2026-03-04T01:03:53.373551992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:03:53.373885 containerd[1481]: time="2026-03-04T01:03:53.373732359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:03:53.375485 containerd[1481]: time="2026-03-04T01:03:53.373959595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:03:53.458139 systemd[1]: Started cri-containerd-681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34.scope - libcontainer container 681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34.
Mar 4 01:03:53.533465 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 4 01:03:53.590287 containerd[1481]: time="2026-03-04T01:03:53.590082394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b64679fb-hzv4z,Uid:55e105c4-cb80-455a-abe4-3f8ab66ac4c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34\""
Mar 4 01:03:53.598036 containerd[1481]: time="2026-03-04T01:03:53.597838386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 4 01:03:53.622049 containerd[1481]: time="2026-03-04T01:03:53.621904601Z" level=info msg="StopPodSandbox for \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\""
Mar 4 01:03:53.626248 containerd[1481]: time="2026-03-04T01:03:53.625925072Z" level=info msg="StopPodSandbox for \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\""
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.769 [INFO][4610] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.769 [INFO][4610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" iface="eth0" netns="/var/run/netns/cni-1e523d1e-1c80-f035-f622-0887c605a552"
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.770 [INFO][4610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" iface="eth0" netns="/var/run/netns/cni-1e523d1e-1c80-f035-f622-0887c605a552"
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.778 [INFO][4610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" iface="eth0" netns="/var/run/netns/cni-1e523d1e-1c80-f035-f622-0887c605a552"
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.778 [INFO][4610] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.778 [INFO][4610] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.870 [INFO][4625] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" HandleID="k8s-pod-network.bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.871 [INFO][4625] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.871 [INFO][4625] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.886 [WARNING][4625] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" HandleID="k8s-pod-network.bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.886 [INFO][4625] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" HandleID="k8s-pod-network.bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.897 [INFO][4625] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:03:53.905916 containerd[1481]: 2026-03-04 01:03:53.902 [INFO][4610] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:03:53.936778 containerd[1481]: time="2026-03-04T01:03:53.935460186Z" level=info msg="TearDown network for sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\" successfully"
Mar 4 01:03:53.936778 containerd[1481]: time="2026-03-04T01:03:53.935783290Z" level=info msg="StopPodSandbox for \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\" returns successfully"
Mar 4 01:03:53.950728 systemd[1]: run-netns-cni\x2d1e523d1e\x2d1c80\x2df035\x2df622\x2d0887c605a552.mount: Deactivated successfully.
Mar 4 01:03:53.960068 containerd[1481]: time="2026-03-04T01:03:53.959846174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dbf4f54f5-flgh7,Uid:28010a71-1727-4efe-b343-de6e69fbd281,Namespace:calico-system,Attempt:1,}"
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.811 [INFO][4609] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80"
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.811 [INFO][4609] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" iface="eth0" netns="/var/run/netns/cni-bcdf682f-e5bd-2448-691c-3b7940c38dc9"
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.813 [INFO][4609] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" iface="eth0" netns="/var/run/netns/cni-bcdf682f-e5bd-2448-691c-3b7940c38dc9"
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.813 [INFO][4609] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" iface="eth0" netns="/var/run/netns/cni-bcdf682f-e5bd-2448-691c-3b7940c38dc9"
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.814 [INFO][4609] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80"
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.814 [INFO][4609] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80"
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.904 [INFO][4631] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" HandleID="k8s-pod-network.cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0"
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.905 [INFO][4631] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.905 [INFO][4631] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.965 [WARNING][4631] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" HandleID="k8s-pod-network.cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0"
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.965 [INFO][4631] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" HandleID="k8s-pod-network.cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0"
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.973 [INFO][4631] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:03:53.986133 containerd[1481]: 2026-03-04 01:03:53.980 [INFO][4609] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80"
Mar 4 01:03:53.988818 containerd[1481]: time="2026-03-04T01:03:53.987999840Z" level=info msg="TearDown network for sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\" successfully"
Mar 4 01:03:53.988818 containerd[1481]: time="2026-03-04T01:03:53.988041348Z" level=info msg="StopPodSandbox for \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\" returns successfully"
Mar 4 01:03:53.994818 systemd[1]: run-netns-cni\x2dbcdf682f\x2de5bd\x2d2448\x2d691c\x2d3b7940c38dc9.mount: Deactivated successfully.
Mar 4 01:03:54.003556 containerd[1481]: time="2026-03-04T01:03:54.003508428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rs6f4,Uid:24812854-f0ac-4651-986c-4d61a0df5440,Namespace:calico-system,Attempt:1,}"
Mar 4 01:03:54.626996 containerd[1481]: time="2026-03-04T01:03:54.626777486Z" level=info msg="StopPodSandbox for \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\""
Mar 4 01:03:54.635505 systemd-networkd[1397]: cali2af97af437d: Link UP
Mar 4 01:03:54.639675 systemd-networkd[1397]: cali2af97af437d: Gained carrier
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.220 [INFO][4641] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0 calico-kube-controllers-6dbf4f54f5- calico-system 28010a71-1727-4efe-b343-de6e69fbd281 1179 0 2026-03-04 01:02:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dbf4f54f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6dbf4f54f5-flgh7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2af97af437d [] [] }} ContainerID="5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" Namespace="calico-system" Pod="calico-kube-controllers-6dbf4f54f5-flgh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.221 [INFO][4641] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" Namespace="calico-system" Pod="calico-kube-controllers-6dbf4f54f5-flgh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.367 [INFO][4669] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" HandleID="k8s-pod-network.5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.415 [INFO][4669] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" HandleID="k8s-pod-network.5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f1730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6dbf4f54f5-flgh7", "timestamp":"2026-03-04 01:03:54.367319752 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000596000)}
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.415 [INFO][4669] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.415 [INFO][4669] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.416 [INFO][4669] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.427 [INFO][4669] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" host="localhost"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.454 [INFO][4669] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.478 [INFO][4669] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.527 [INFO][4669] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.538 [INFO][4669] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.539 [INFO][4669] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" host="localhost"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.554 [INFO][4669] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.574 [INFO][4669] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" host="localhost"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.609 [INFO][4669] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" host="localhost"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.609 [INFO][4669] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" host="localhost"
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.609 [INFO][4669] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:03:54.713173 containerd[1481]: 2026-03-04 01:03:54.609 [INFO][4669] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" HandleID="k8s-pod-network.5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:03:54.727295 containerd[1481]: 2026-03-04 01:03:54.617 [INFO][4641] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" Namespace="calico-system" Pod="calico-kube-controllers-6dbf4f54f5-flgh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0", GenerateName:"calico-kube-controllers-6dbf4f54f5-", Namespace:"calico-system", SelfLink:"", UID:"28010a71-1727-4efe-b343-de6e69fbd281", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dbf4f54f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6dbf4f54f5-flgh7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2af97af437d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:03:54.727295 containerd[1481]: 2026-03-04 01:03:54.618 [INFO][4641] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" Namespace="calico-system" Pod="calico-kube-controllers-6dbf4f54f5-flgh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:03:54.727295 containerd[1481]: 2026-03-04 01:03:54.618 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2af97af437d ContainerID="5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" Namespace="calico-system" Pod="calico-kube-controllers-6dbf4f54f5-flgh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:03:54.727295 containerd[1481]: 2026-03-04 01:03:54.639 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" Namespace="calico-system" Pod="calico-kube-controllers-6dbf4f54f5-flgh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:03:54.727295 containerd[1481]: 2026-03-04 01:03:54.640 [INFO][4641] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" Namespace="calico-system" Pod="calico-kube-controllers-6dbf4f54f5-flgh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0", GenerateName:"calico-kube-controllers-6dbf4f54f5-", Namespace:"calico-system", SelfLink:"", UID:"28010a71-1727-4efe-b343-de6e69fbd281", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dbf4f54f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d", Pod:"calico-kube-controllers-6dbf4f54f5-flgh7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2af97af437d", MAC:"56:92:b8:39:82:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:03:54.727295 containerd[1481]: 2026-03-04 01:03:54.695 [INFO][4641] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d" Namespace="calico-system" Pod="calico-kube-controllers-6dbf4f54f5-flgh7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:03:54.843573 systemd-networkd[1397]: calif0cc479e32f: Link UP
Mar 4 01:03:54.844531 systemd-networkd[1397]: calif0cc479e32f: Gained carrier
Mar 4 01:03:54.866730 containerd[1481]: time="2026-03-04T01:03:54.866343577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:03:54.873068 containerd[1481]: time="2026-03-04T01:03:54.866592983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:03:54.874547 containerd[1481]: time="2026-03-04T01:03:54.873340860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:03:54.874547 containerd[1481]: time="2026-03-04T01:03:54.874463288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.325 [INFO][4654] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rs6f4-eth0 csi-node-driver- calico-system 24812854-f0ac-4651-986c-4d61a0df5440 1181 0 2026-03-04 01:02:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rs6f4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif0cc479e32f [] [] }} ContainerID="58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" Namespace="calico-system" Pod="csi-node-driver-rs6f4" WorkloadEndpoint="localhost-k8s-csi--node--driver--rs6f4-"
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.325 [INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" Namespace="calico-system" Pod="csi-node-driver-rs6f4" WorkloadEndpoint="localhost-k8s-csi--node--driver--rs6f4-eth0"
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.484 [INFO][4676] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" HandleID="k8s-pod-network.58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0"
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.537 [INFO][4676] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" HandleID="k8s-pod-network.58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000582290), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rs6f4", "timestamp":"2026-03-04 01:03:54.484144282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000527080)}
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.538 [INFO][4676] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.609 [INFO][4676] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.613 [INFO][4676] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.629 [INFO][4676] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" host="localhost"
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.663 [INFO][4676] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.686 [INFO][4676] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.698 [INFO][4676] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.710 [INFO][4676] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 4 01:03:54.934573 containerd[1481]:
2026-03-04 01:03:54.710 [INFO][4676] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" host="localhost" Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.744 [INFO][4676] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.771 [INFO][4676] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" host="localhost" Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.800 [INFO][4676] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" host="localhost" Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.802 [INFO][4676] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" host="localhost" Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.803 [INFO][4676] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:03:54.934573 containerd[1481]: 2026-03-04 01:03:54.805 [INFO][4676] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" HandleID="k8s-pod-network.58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0" Mar 4 01:03:54.936155 containerd[1481]: 2026-03-04 01:03:54.810 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" Namespace="calico-system" Pod="csi-node-driver-rs6f4" WorkloadEndpoint="localhost-k8s-csi--node--driver--rs6f4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rs6f4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24812854-f0ac-4651-986c-4d61a0df5440", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rs6f4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0cc479e32f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:03:54.936155 containerd[1481]: 2026-03-04 01:03:54.811 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" Namespace="calico-system" Pod="csi-node-driver-rs6f4" WorkloadEndpoint="localhost-k8s-csi--node--driver--rs6f4-eth0" Mar 4 01:03:54.936155 containerd[1481]: 2026-03-04 01:03:54.811 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0cc479e32f ContainerID="58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" Namespace="calico-system" Pod="csi-node-driver-rs6f4" WorkloadEndpoint="localhost-k8s-csi--node--driver--rs6f4-eth0" Mar 4 01:03:54.936155 containerd[1481]: 2026-03-04 01:03:54.847 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" Namespace="calico-system" Pod="csi-node-driver-rs6f4" WorkloadEndpoint="localhost-k8s-csi--node--driver--rs6f4-eth0" Mar 4 01:03:54.936155 containerd[1481]: 2026-03-04 01:03:54.848 [INFO][4654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" Namespace="calico-system" Pod="csi-node-driver-rs6f4" WorkloadEndpoint="localhost-k8s-csi--node--driver--rs6f4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rs6f4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24812854-f0ac-4651-986c-4d61a0df5440", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 51, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a", Pod:"csi-node-driver-rs6f4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0cc479e32f", MAC:"c6:69:91:58:74:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:03:54.936155 containerd[1481]: 2026-03-04 01:03:54.912 [INFO][4654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a" Namespace="calico-system" Pod="csi-node-driver-rs6f4" WorkloadEndpoint="localhost-k8s-csi--node--driver--rs6f4-eth0" Mar 4 01:03:54.961692 systemd[1]: Started cri-containerd-5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d.scope - libcontainer container 5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d. Mar 4 01:03:55.002686 containerd[1481]: time="2026-03-04T01:03:55.001570891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:03:55.002686 containerd[1481]: time="2026-03-04T01:03:55.001896150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:03:55.002686 containerd[1481]: time="2026-03-04T01:03:55.001940252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:55.003221 containerd[1481]: time="2026-03-04T01:03:55.002725207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.886 [INFO][4698] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.887 [INFO][4698] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" iface="eth0" netns="/var/run/netns/cni-99b0e717-caeb-25a3-4dc5-af4b02085ff9" Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.888 [INFO][4698] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" iface="eth0" netns="/var/run/netns/cni-99b0e717-caeb-25a3-4dc5-af4b02085ff9" Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.889 [INFO][4698] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" iface="eth0" netns="/var/run/netns/cni-99b0e717-caeb-25a3-4dc5-af4b02085ff9" Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.889 [INFO][4698] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.889 [INFO][4698] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.973 [INFO][4741] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" HandleID="k8s-pod-network.793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.973 [INFO][4741] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.973 [INFO][4741] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.991 [WARNING][4741] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" HandleID="k8s-pod-network.793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.991 [INFO][4741] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" HandleID="k8s-pod-network.793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:54.998 [INFO][4741] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:03:55.027621 containerd[1481]: 2026-03-04 01:03:55.003 [INFO][4698] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:03:55.027621 containerd[1481]: time="2026-03-04T01:03:55.027570297Z" level=info msg="TearDown network for sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\" successfully" Mar 4 01:03:55.027621 containerd[1481]: time="2026-03-04T01:03:55.027613066Z" level=info msg="StopPodSandbox for \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\" returns successfully" Mar 4 01:03:55.036518 systemd[1]: run-netns-cni\x2d99b0e717\x2dcaeb\x2d25a3\x2d4dc5\x2daf4b02085ff9.mount: Deactivated successfully. 
Mar 4 01:03:55.038925 kubelet[2579]: E0304 01:03:55.038836 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:55.041421 containerd[1481]: time="2026-03-04T01:03:55.041202216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rr9gl,Uid:ba3a3117-2ed7-420f-9281-01467babd9c7,Namespace:kube-system,Attempt:1,}" Mar 4 01:03:55.056911 systemd[1]: run-containerd-runc-k8s.io-58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a-runc.wVHO3N.mount: Deactivated successfully. Mar 4 01:03:55.069854 systemd[1]: Started cri-containerd-58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a.scope - libcontainer container 58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a. Mar 4 01:03:55.077319 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:03:55.105003 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:03:55.149817 containerd[1481]: time="2026-03-04T01:03:55.149695823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rs6f4,Uid:24812854-f0ac-4651-986c-4d61a0df5440,Namespace:calico-system,Attempt:1,} returns sandbox id \"58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a\"" Mar 4 01:03:55.194757 containerd[1481]: time="2026-03-04T01:03:55.193849812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dbf4f54f5-flgh7,Uid:28010a71-1727-4efe-b343-de6e69fbd281,Namespace:calico-system,Attempt:1,} returns sandbox id \"5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d\"" Mar 4 01:03:55.206566 systemd-networkd[1397]: cali80bc3f7dafa: Gained IPv6LL Mar 4 01:03:55.454244 systemd-networkd[1397]: cali57bb53015a6: Link UP Mar 4 01:03:55.456038 
systemd-networkd[1397]: cali57bb53015a6: Gained carrier Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.223 [INFO][4807] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--rr9gl-eth0 coredns-66bc5c9577- kube-system ba3a3117-2ed7-420f-9281-01467babd9c7 1194 0 2026-03-04 01:02:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-rr9gl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali57bb53015a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" Namespace="kube-system" Pod="coredns-66bc5c9577-rr9gl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rr9gl-" Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.223 [INFO][4807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" Namespace="kube-system" Pod="coredns-66bc5c9577-rr9gl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.305 [INFO][4852] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" HandleID="k8s-pod-network.31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.319 [INFO][4852] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" 
HandleID="k8s-pod-network.31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00049d530), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-rr9gl", "timestamp":"2026-03-04 01:03:55.305268036 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000113080)} Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.320 [INFO][4852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.321 [INFO][4852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.322 [INFO][4852] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.329 [INFO][4852] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" host="localhost" Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.344 [INFO][4852] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.356 [INFO][4852] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.360 [INFO][4852] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.366 [INFO][4852] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:03:55.517588 
containerd[1481]: 2026-03-04 01:03:55.366 [INFO][4852] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" host="localhost" Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.371 [INFO][4852] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269 Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.384 [INFO][4852] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" host="localhost" Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.427 [INFO][4852] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" host="localhost" Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.428 [INFO][4852] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" host="localhost" Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.428 [INFO][4852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:03:55.517588 containerd[1481]: 2026-03-04 01:03:55.428 [INFO][4852] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" HandleID="k8s-pod-network.31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:03:55.518854 containerd[1481]: 2026-03-04 01:03:55.438 [INFO][4807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" Namespace="kube-system" Pod="coredns-66bc5c9577-rr9gl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rr9gl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ba3a3117-2ed7-420f-9281-01467babd9c7", ResourceVersion:"1194", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-rr9gl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57bb53015a6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:03:55.518854 containerd[1481]: 2026-03-04 01:03:55.439 [INFO][4807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" Namespace="kube-system" Pod="coredns-66bc5c9577-rr9gl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:03:55.518854 containerd[1481]: 2026-03-04 01:03:55.439 [INFO][4807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57bb53015a6 ContainerID="31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" Namespace="kube-system" Pod="coredns-66bc5c9577-rr9gl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:03:55.518854 containerd[1481]: 2026-03-04 01:03:55.459 [INFO][4807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" Namespace="kube-system" Pod="coredns-66bc5c9577-rr9gl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:03:55.518854 containerd[1481]: 2026-03-04 01:03:55.461 [INFO][4807] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" Namespace="kube-system" Pod="coredns-66bc5c9577-rr9gl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rr9gl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ba3a3117-2ed7-420f-9281-01467babd9c7", ResourceVersion:"1194", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269", Pod:"coredns-66bc5c9577-rr9gl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57bb53015a6", MAC:"f6:a9:44:0e:2d:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:03:55.518854 containerd[1481]: 2026-03-04 01:03:55.506 [INFO][4807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269" Namespace="kube-system" Pod="coredns-66bc5c9577-rr9gl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:03:55.569059 containerd[1481]: time="2026-03-04T01:03:55.568147793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:03:55.569059 containerd[1481]: time="2026-03-04T01:03:55.568320803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:03:55.569059 containerd[1481]: time="2026-03-04T01:03:55.568590348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:55.569059 containerd[1481]: time="2026-03-04T01:03:55.568919565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:55.624842 systemd[1]: Started cri-containerd-31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269.scope - libcontainer container 31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269. 
Mar 4 01:03:55.627486 containerd[1481]: time="2026-03-04T01:03:55.627245884Z" level=info msg="StopPodSandbox for \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\"" Mar 4 01:03:55.628048 containerd[1481]: time="2026-03-04T01:03:55.627965136Z" level=info msg="StopPodSandbox for \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\"" Mar 4 01:03:55.637908 containerd[1481]: time="2026-03-04T01:03:55.637711159Z" level=info msg="StopPodSandbox for \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\"" Mar 4 01:03:55.727114 systemd-networkd[1397]: cali2af97af437d: Gained IPv6LL Mar 4 01:03:55.747747 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:03:55.957313 containerd[1481]: time="2026-03-04T01:03:55.956595466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rr9gl,Uid:ba3a3117-2ed7-420f-9281-01467babd9c7,Namespace:kube-system,Attempt:1,} returns sandbox id \"31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269\"" Mar 4 01:03:55.973455 kubelet[2579]: E0304 01:03:55.970117 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:55.999533 containerd[1481]: time="2026-03-04T01:03:55.997896844Z" level=info msg="CreateContainer within sandbox \"31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:03:56.070504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105281089.mount: Deactivated successfully. 
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:55.948 [INFO][4950] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb"
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:55.948 [INFO][4950] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" iface="eth0" netns="/var/run/netns/cni-3b559db2-95f1-8e4c-d8a4-381b5b4a985e"
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:55.954 [INFO][4950] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" iface="eth0" netns="/var/run/netns/cni-3b559db2-95f1-8e4c-d8a4-381b5b4a985e"
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:55.956 [INFO][4950] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" iface="eth0" netns="/var/run/netns/cni-3b559db2-95f1-8e4c-d8a4-381b5b4a985e"
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:55.957 [INFO][4950] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb"
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:55.959 [INFO][4950] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb"
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:56.026 [INFO][4981] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" HandleID="k8s-pod-network.98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0"
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:56.032 [INFO][4981] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:56.032 [INFO][4981] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:56.065 [WARNING][4981] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" HandleID="k8s-pod-network.98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0"
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:56.065 [INFO][4981] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" HandleID="k8s-pod-network.98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0"
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:56.071 [INFO][4981] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:03:56.099784 containerd[1481]: 2026-03-04 01:03:56.093 [INFO][4950] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb"
Mar 4 01:03:56.100703 containerd[1481]: time="2026-03-04T01:03:56.100494406Z" level=info msg="TearDown network for sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\" successfully"
Mar 4 01:03:56.100703 containerd[1481]: time="2026-03-04T01:03:56.100529281Z" level=info msg="StopPodSandbox for \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\" returns successfully"
Mar 4 01:03:56.102708 containerd[1481]: time="2026-03-04T01:03:56.102668862Z" level=info msg="CreateContainer within sandbox \"31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f77d0181c3bc6f35fe9d5af110b63496e0137fca2e1f4a0e29ac0e71f7a232b2\""
Mar 4 01:03:56.104666 systemd[1]: run-netns-cni\x2d3b559db2\x2d95f1\x2d8e4c\x2dd8a4\x2d381b5b4a985e.mount: Deactivated successfully.
Mar 4 01:03:56.111974 containerd[1481]: time="2026-03-04T01:03:56.111920407Z" level=info msg="StartContainer for \"f77d0181c3bc6f35fe9d5af110b63496e0137fca2e1f4a0e29ac0e71f7a232b2\""
Mar 4 01:03:56.119607 containerd[1481]: time="2026-03-04T01:03:56.119490180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b64679fb-6fgtf,Uid:9c0b93c3-34c4-4c8f-bfc1-54f9448d999f,Namespace:calico-system,Attempt:1,}"
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:55.967 [INFO][4946] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:55.967 [INFO][4946] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" iface="eth0" netns="/var/run/netns/cni-6370b8ef-5d51-7fad-c637-b8edfdded26e"
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:55.974 [INFO][4946] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" iface="eth0" netns="/var/run/netns/cni-6370b8ef-5d51-7fad-c637-b8edfdded26e"
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:55.976 [INFO][4946] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" iface="eth0" netns="/var/run/netns/cni-6370b8ef-5d51-7fad-c637-b8edfdded26e"
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:55.979 [INFO][4946] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:55.979 [INFO][4946] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:56.119 [INFO][4985] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" HandleID="k8s-pod-network.549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:56.120 [INFO][4985] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:56.121 [INFO][4985] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:56.136 [WARNING][4985] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" HandleID="k8s-pod-network.549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:56.137 [INFO][4985] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" HandleID="k8s-pod-network.549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:56.143 [INFO][4985] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:03:56.172495 containerd[1481]: 2026-03-04 01:03:56.156 [INFO][4946] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:03:56.176090 containerd[1481]: time="2026-03-04T01:03:56.174994060Z" level=info msg="TearDown network for sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\" successfully"
Mar 4 01:03:56.176090 containerd[1481]: time="2026-03-04T01:03:56.175036408Z" level=info msg="StopPodSandbox for \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\" returns successfully"
Mar 4 01:03:56.181316 kubelet[2579]: E0304 01:03:56.181031 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:03:56.184258 containerd[1481]: time="2026-03-04T01:03:56.184151623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-svjm8,Uid:0199c151-24ad-4cf2-ae91-b7e9b350322f,Namespace:kube-system,Attempt:1,}"
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.039 [INFO][4937] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0"
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.046 [INFO][4937] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" iface="eth0" netns="/var/run/netns/cni-d12ef8cc-e8a5-a477-c3c3-5baf72373cd9"
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.048 [INFO][4937] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" iface="eth0" netns="/var/run/netns/cni-d12ef8cc-e8a5-a477-c3c3-5baf72373cd9"
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.052 [INFO][4937] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" iface="eth0" netns="/var/run/netns/cni-d12ef8cc-e8a5-a477-c3c3-5baf72373cd9"
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.054 [INFO][4937] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0"
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.058 [INFO][4937] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0"
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.171 [INFO][5003] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" HandleID="k8s-pod-network.01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0"
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.172 [INFO][5003] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.172 [INFO][5003] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.192 [WARNING][5003] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" HandleID="k8s-pod-network.01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0"
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.192 [INFO][5003] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" HandleID="k8s-pod-network.01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0"
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.201 [INFO][5003] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:03:56.215907 containerd[1481]: 2026-03-04 01:03:56.210 [INFO][4937] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0"
Mar 4 01:03:56.231044 containerd[1481]: time="2026-03-04T01:03:56.228933214Z" level=info msg="TearDown network for sandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\" successfully"
Mar 4 01:03:56.231044 containerd[1481]: time="2026-03-04T01:03:56.228986383Z" level=info msg="StopPodSandbox for \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\" returns successfully"
Mar 4 01:03:56.235824 systemd[1]: Started cri-containerd-f77d0181c3bc6f35fe9d5af110b63496e0137fca2e1f4a0e29ac0e71f7a232b2.scope - libcontainer container f77d0181c3bc6f35fe9d5af110b63496e0137fca2e1f4a0e29ac0e71f7a232b2.
Mar 4 01:03:56.238937 containerd[1481]: time="2026-03-04T01:03:56.238866768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-whcgp,Uid:883fc4b1-6269-44ab-9fb7-38da3bb836eb,Namespace:calico-system,Attempt:1,}"
Mar 4 01:03:56.358870 systemd-networkd[1397]: calif0cc479e32f: Gained IPv6LL
Mar 4 01:03:56.383929 containerd[1481]: time="2026-03-04T01:03:56.383097965Z" level=info msg="StartContainer for \"f77d0181c3bc6f35fe9d5af110b63496e0137fca2e1f4a0e29ac0e71f7a232b2\" returns successfully"
Mar 4 01:03:56.550906 systemd-networkd[1397]: cali57bb53015a6: Gained IPv6LL
Mar 4 01:03:56.651311 systemd-networkd[1397]: cali4c64575a74f: Link UP
Mar 4 01:03:56.652800 systemd-networkd[1397]: cali4c64575a74f: Gained carrier
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.259 [INFO][5019] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0 calico-apiserver-66b64679fb- calico-system 9c0b93c3-34c4-4c8f-bfc1-54f9448d999f 1209 0 2026-03-04 01:02:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66b64679fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66b64679fb-6fgtf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4c64575a74f [] [] }} ContainerID="c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-6fgtf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.259 [INFO][5019] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-6fgtf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.417 [INFO][5065] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" HandleID="k8s-pod-network.c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.441 [INFO][5065] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" HandleID="k8s-pod-network.c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d2c60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-66b64679fb-6fgtf", "timestamp":"2026-03-04 01:03:56.417993907 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000566dc0)}
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.441 [INFO][5065] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.442 [INFO][5065] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.442 [INFO][5065] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.457 [INFO][5065] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" host="localhost"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.536 [INFO][5065] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.553 [INFO][5065] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.563 [INFO][5065] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.572 [INFO][5065] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.573 [INFO][5065] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" host="localhost"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.580 [INFO][5065] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.613 [INFO][5065] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" host="localhost"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.627 [INFO][5065] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" host="localhost"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.628 [INFO][5065] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" host="localhost"
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.629 [INFO][5065] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:03:56.712137 containerd[1481]: 2026-03-04 01:03:56.629 [INFO][5065] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" HandleID="k8s-pod-network.c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0"
Mar 4 01:03:56.714091 containerd[1481]: 2026-03-04 01:03:56.637 [INFO][5019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-6fgtf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0", GenerateName:"calico-apiserver-66b64679fb-", Namespace:"calico-system", SelfLink:"", UID:"9c0b93c3-34c4-4c8f-bfc1-54f9448d999f", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b64679fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66b64679fb-6fgtf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4c64575a74f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:03:56.714091 containerd[1481]: 2026-03-04 01:03:56.637 [INFO][5019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-6fgtf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0"
Mar 4 01:03:56.714091 containerd[1481]: 2026-03-04 01:03:56.637 [INFO][5019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c64575a74f ContainerID="c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-6fgtf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0"
Mar 4 01:03:56.714091 containerd[1481]: 2026-03-04 01:03:56.658 [INFO][5019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-6fgtf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0"
Mar 4 01:03:56.714091 containerd[1481]: 2026-03-04 01:03:56.659 [INFO][5019] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-6fgtf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0", GenerateName:"calico-apiserver-66b64679fb-", Namespace:"calico-system", SelfLink:"", UID:"9c0b93c3-34c4-4c8f-bfc1-54f9448d999f", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b64679fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6", Pod:"calico-apiserver-66b64679fb-6fgtf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4c64575a74f", MAC:"72:bc:98:20:07:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:03:56.714091 containerd[1481]: 2026-03-04 01:03:56.686 [INFO][5019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6" Namespace="calico-system" Pod="calico-apiserver-66b64679fb-6fgtf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0"
Mar 4 01:03:56.760093 systemd-networkd[1397]: cali6e98f0db6aa: Link UP
Mar 4 01:03:56.766009 systemd-networkd[1397]: cali6e98f0db6aa: Gained carrier
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.349 [INFO][5054] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--svjm8-eth0 coredns-66bc5c9577- kube-system 0199c151-24ad-4cf2-ae91-b7e9b350322f 1210 0 2026-03-04 01:02:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-svjm8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6e98f0db6aa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" Namespace="kube-system" Pod="coredns-66bc5c9577-svjm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--svjm8-"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.349 [INFO][5054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" Namespace="kube-system" Pod="coredns-66bc5c9577-svjm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.574 [INFO][5096] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" HandleID="k8s-pod-network.b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.599 [INFO][5096] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" HandleID="k8s-pod-network.b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003696d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-svjm8", "timestamp":"2026-03-04 01:03:56.574865139 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000c54a0)}
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.599 [INFO][5096] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.630 [INFO][5096] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.630 [INFO][5096] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.639 [INFO][5096] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" host="localhost"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.659 [INFO][5096] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.674 [INFO][5096] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.682 [INFO][5096] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.693 [INFO][5096] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.694 [INFO][5096] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" host="localhost"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.702 [INFO][5096] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.714 [INFO][5096] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" host="localhost"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.737 [INFO][5096] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" host="localhost"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.739 [INFO][5096] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" host="localhost"
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.739 [INFO][5096] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:03:56.816955 containerd[1481]: 2026-03-04 01:03:56.740 [INFO][5096] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" HandleID="k8s-pod-network.b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:03:56.818165 containerd[1481]: 2026-03-04 01:03:56.749 [INFO][5054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" Namespace="kube-system" Pod="coredns-66bc5c9577-svjm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--svjm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--svjm8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0199c151-24ad-4cf2-ae91-b7e9b350322f", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-svjm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e98f0db6aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:03:56.818165 containerd[1481]: 2026-03-04 01:03:56.749 [INFO][5054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" Namespace="kube-system" Pod="coredns-66bc5c9577-svjm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:03:56.818165 containerd[1481]: 2026-03-04 01:03:56.750 [INFO][5054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e98f0db6aa ContainerID="b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" Namespace="kube-system" Pod="coredns-66bc5c9577-svjm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:03:56.818165 containerd[1481]: 2026-03-04 01:03:56.770 [INFO][5054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" Namespace="kube-system" Pod="coredns-66bc5c9577-svjm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:03:56.818165 containerd[1481]: 2026-03-04 01:03:56.781 [INFO][5054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" Namespace="kube-system" Pod="coredns-66bc5c9577-svjm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--svjm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--svjm8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0199c151-24ad-4cf2-ae91-b7e9b350322f", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9", Pod:"coredns-66bc5c9577-svjm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e98f0db6aa", MAC:"72:f4:0f:e4:ab:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:03:56.818165 containerd[1481]: 2026-03-04 01:03:56.812 [INFO][5054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9" Namespace="kube-system" Pod="coredns-66bc5c9577-svjm8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:03:56.842943 containerd[1481]: time="2026-03-04T01:03:56.842292619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:03:56.842943 containerd[1481]: time="2026-03-04T01:03:56.842523072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:03:56.842943 containerd[1481]: time="2026-03-04T01:03:56.842572604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:03:56.849750 containerd[1481]: time="2026-03-04T01:03:56.849545033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:03:56.961727 systemd[1]: run-netns-cni\x2dd12ef8cc\x2de8a5\x2da477\x2dc3c3\x2d5baf72373cd9.mount: Deactivated successfully.
Mar 4 01:03:56.961882 systemd[1]: run-netns-cni\x2d6370b8ef\x2d5d51\x2d7fad\x2dc637\x2db8edfdded26e.mount: Deactivated successfully.
Mar 4 01:03:56.978146 containerd[1481]: time="2026-03-04T01:03:56.966305281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:03:56.978146 containerd[1481]: time="2026-03-04T01:03:56.969739645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:03:56.978146 containerd[1481]: time="2026-03-04T01:03:56.969767556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:03:56.978146 containerd[1481]: time="2026-03-04T01:03:56.970028988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:03:57.018878 systemd[1]: Started cri-containerd-c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6.scope - libcontainer container c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6.
Mar 4 01:03:57.047860 systemd[1]: Started cri-containerd-b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9.scope - libcontainer container b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9.
Mar 4 01:03:57.074056 systemd-networkd[1397]: cali6414439ad9a: Link UP Mar 4 01:03:57.076491 systemd-networkd[1397]: cali6414439ad9a: Gained carrier Mar 4 01:03:57.093844 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:03:57.096676 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:56.547 [INFO][5070] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0 goldmane-cccfbd5cf- calico-system 883fc4b1-6269-44ab-9fb7-38da3bb836eb 1213 0 2026-03-04 01:02:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-whcgp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6414439ad9a [] [] }} ContainerID="cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" Namespace="calico-system" Pod="goldmane-cccfbd5cf-whcgp" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--whcgp-" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:56.547 [INFO][5070] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" Namespace="calico-system" Pod="goldmane-cccfbd5cf-whcgp" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:56.768 [INFO][5115] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" HandleID="k8s-pod-network.cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" 
Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:56.808 [INFO][5115] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" HandleID="k8s-pod-network.cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f250), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-whcgp", "timestamp":"2026-03-04 01:03:56.7681854 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112420)} Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:56.808 [INFO][5115] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:56.808 [INFO][5115] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:56.808 [INFO][5115] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:56.818 [INFO][5115] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" host="localhost" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:57.000 [INFO][5115] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:57.016 [INFO][5115] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:57.020 [INFO][5115] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:57.024 [INFO][5115] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:57.024 [INFO][5115] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" host="localhost" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:57.028 [INFO][5115] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538 Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:57.039 [INFO][5115] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" host="localhost" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:57.053 [INFO][5115] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" host="localhost" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:57.053 [INFO][5115] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" host="localhost" Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:57.053 [INFO][5115] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:03:57.131840 containerd[1481]: 2026-03-04 01:03:57.053 [INFO][5115] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" HandleID="k8s-pod-network.cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:03:57.133306 containerd[1481]: 2026-03-04 01:03:57.066 [INFO][5070] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" Namespace="calico-system" Pod="goldmane-cccfbd5cf-whcgp" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"883fc4b1-6269-44ab-9fb7-38da3bb836eb", ResourceVersion:"1213", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-whcgp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6414439ad9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:03:57.133306 containerd[1481]: 2026-03-04 01:03:57.067 [INFO][5070] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" Namespace="calico-system" Pod="goldmane-cccfbd5cf-whcgp" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:03:57.133306 containerd[1481]: 2026-03-04 01:03:57.067 [INFO][5070] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6414439ad9a ContainerID="cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" Namespace="calico-system" Pod="goldmane-cccfbd5cf-whcgp" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:03:57.133306 containerd[1481]: 2026-03-04 01:03:57.076 [INFO][5070] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" Namespace="calico-system" Pod="goldmane-cccfbd5cf-whcgp" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:03:57.133306 containerd[1481]: 2026-03-04 01:03:57.079 [INFO][5070] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" Namespace="calico-system" Pod="goldmane-cccfbd5cf-whcgp" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"883fc4b1-6269-44ab-9fb7-38da3bb836eb", ResourceVersion:"1213", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538", Pod:"goldmane-cccfbd5cf-whcgp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6414439ad9a", MAC:"4a:f4:48:7c:03:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:03:57.133306 containerd[1481]: 2026-03-04 01:03:57.122 [INFO][5070] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538" Namespace="calico-system" Pod="goldmane-cccfbd5cf-whcgp" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:03:57.153343 containerd[1481]: time="2026-03-04T01:03:57.153234124Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-svjm8,Uid:0199c151-24ad-4cf2-ae91-b7e9b350322f,Namespace:kube-system,Attempt:1,} returns sandbox id \"b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9\"" Mar 4 01:03:57.155682 kubelet[2579]: E0304 01:03:57.155654 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:57.202279 containerd[1481]: time="2026-03-04T01:03:57.202193332Z" level=info msg="CreateContainer within sandbox \"b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:03:57.217704 containerd[1481]: time="2026-03-04T01:03:57.217240967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:03:57.217704 containerd[1481]: time="2026-03-04T01:03:57.217313220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:03:57.217878 containerd[1481]: time="2026-03-04T01:03:57.217781391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b64679fb-6fgtf,Uid:9c0b93c3-34c4-4c8f-bfc1-54f9448d999f,Namespace:calico-system,Attempt:1,} returns sandbox id \"c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6\"" Mar 4 01:03:57.219331 containerd[1481]: time="2026-03-04T01:03:57.217337323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:57.221617 containerd[1481]: time="2026-03-04T01:03:57.219966526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:03:57.244870 containerd[1481]: time="2026-03-04T01:03:57.244770683Z" level=info msg="CreateContainer within sandbox \"b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3637b100051454b221a0be28dfce088a6bd41415db03cc04ddab304a546a332\"" Mar 4 01:03:57.246564 containerd[1481]: time="2026-03-04T01:03:57.246501851Z" level=info msg="StartContainer for \"b3637b100051454b221a0be28dfce088a6bd41415db03cc04ddab304a546a332\"" Mar 4 01:03:57.266563 systemd[1]: Started cri-containerd-cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538.scope - libcontainer container cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538. Mar 4 01:03:57.291541 systemd[1]: Started cri-containerd-b3637b100051454b221a0be28dfce088a6bd41415db03cc04ddab304a546a332.scope - libcontainer container b3637b100051454b221a0be28dfce088a6bd41415db03cc04ddab304a546a332. 
Mar 4 01:03:57.327967 kubelet[2579]: E0304 01:03:57.327747 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:57.338300 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:03:57.415146 containerd[1481]: time="2026-03-04T01:03:57.414533718Z" level=info msg="StartContainer for \"b3637b100051454b221a0be28dfce088a6bd41415db03cc04ddab304a546a332\" returns successfully" Mar 4 01:03:57.418181 kubelet[2579]: I0304 01:03:57.417247 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rr9gl" podStartSLOduration=99.417224114 podStartE2EDuration="1m39.417224114s" podCreationTimestamp="2026-03-04 01:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:03:57.37480866 +0000 UTC m=+103.346870452" watchObservedRunningTime="2026-03-04 01:03:57.417224114 +0000 UTC m=+103.389285906" Mar 4 01:03:57.458707 containerd[1481]: time="2026-03-04T01:03:57.458442083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-whcgp,Uid:883fc4b1-6269-44ab-9fb7-38da3bb836eb,Namespace:calico-system,Attempt:1,} returns sandbox id \"cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538\"" Mar 4 01:03:57.742429 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:40776.service - OpenSSH per-connection server daemon (10.0.0.1:40776). Mar 4 01:03:57.804080 sshd[5350]: Accepted publickey for core from 10.0.0.1 port 40776 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:03:57.809791 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:03:57.821197 systemd-logind[1450]: New session 9 of user core. 
Mar 4 01:03:57.832006 systemd-networkd[1397]: cali6e98f0db6aa: Gained IPv6LL Mar 4 01:03:57.833118 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 4 01:03:58.125858 sshd[5350]: pam_unix(sshd:session): session closed for user core Mar 4 01:03:58.150625 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:40776.service: Deactivated successfully. Mar 4 01:03:58.157313 systemd[1]: session-9.scope: Deactivated successfully. Mar 4 01:03:58.163933 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Mar 4 01:03:58.168117 systemd-logind[1450]: Removed session 9. Mar 4 01:03:58.384280 kubelet[2579]: E0304 01:03:58.374669 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:58.384280 kubelet[2579]: E0304 01:03:58.375892 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:58.472734 systemd-networkd[1397]: cali6414439ad9a: Gained IPv6LL Mar 4 01:03:58.491001 kubelet[2579]: I0304 01:03:58.489617 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-svjm8" podStartSLOduration=100.489549002 podStartE2EDuration="1m40.489549002s" podCreationTimestamp="2026-03-04 01:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:03:58.434335327 +0000 UTC m=+104.406397139" watchObservedRunningTime="2026-03-04 01:03:58.489549002 +0000 UTC m=+104.461610854" Mar 4 01:03:58.536253 systemd-networkd[1397]: cali4c64575a74f: Gained IPv6LL Mar 4 01:03:58.588495 containerd[1481]: time="2026-03-04T01:03:58.588312352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 4 01:03:58.595133 containerd[1481]: time="2026-03-04T01:03:58.594934974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 4 01:03:58.606163 containerd[1481]: time="2026-03-04T01:03:58.606095074Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:58.615632 containerd[1481]: time="2026-03-04T01:03:58.614497310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:58.616766 containerd[1481]: time="2026-03-04T01:03:58.616706280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 5.018667259s" Mar 4 01:03:58.616961 containerd[1481]: time="2026-03-04T01:03:58.616769597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 01:03:58.623134 containerd[1481]: time="2026-03-04T01:03:58.622610347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 4 01:03:58.641505 containerd[1481]: time="2026-03-04T01:03:58.640511128Z" level=info msg="CreateContainer within sandbox \"681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:03:58.693652 containerd[1481]: time="2026-03-04T01:03:58.692213208Z" level=info msg="CreateContainer within sandbox 
\"681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"efca95bd58d38e01e58cc7751464f498d7d15b818812f076faa49eef46ad9efc\"" Mar 4 01:03:58.694498 containerd[1481]: time="2026-03-04T01:03:58.694418971Z" level=info msg="StartContainer for \"efca95bd58d38e01e58cc7751464f498d7d15b818812f076faa49eef46ad9efc\"" Mar 4 01:03:58.761067 systemd[1]: Started cri-containerd-efca95bd58d38e01e58cc7751464f498d7d15b818812f076faa49eef46ad9efc.scope - libcontainer container efca95bd58d38e01e58cc7751464f498d7d15b818812f076faa49eef46ad9efc. Mar 4 01:03:58.870864 containerd[1481]: time="2026-03-04T01:03:58.870765235Z" level=info msg="StartContainer for \"efca95bd58d38e01e58cc7751464f498d7d15b818812f076faa49eef46ad9efc\" returns successfully" Mar 4 01:03:58.962444 systemd[1]: run-containerd-runc-k8s.io-efca95bd58d38e01e58cc7751464f498d7d15b818812f076faa49eef46ad9efc-runc.0tmXQz.mount: Deactivated successfully. Mar 4 01:03:59.383856 kubelet[2579]: E0304 01:03:59.383154 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:59.396667 kubelet[2579]: E0304 01:03:59.385435 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:03:59.744023 containerd[1481]: time="2026-03-04T01:03:59.743884933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:59.749418 containerd[1481]: time="2026-03-04T01:03:59.747957749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 4 01:03:59.755983 containerd[1481]: time="2026-03-04T01:03:59.755772529Z" level=info msg="ImageCreate event 
name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:59.766439 containerd[1481]: time="2026-03-04T01:03:59.764967494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:03:59.766704 containerd[1481]: time="2026-03-04T01:03:59.766458799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.14378776s" Mar 4 01:03:59.766704 containerd[1481]: time="2026-03-04T01:03:59.766509232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 4 01:03:59.774129 containerd[1481]: time="2026-03-04T01:03:59.773933383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 4 01:03:59.788536 containerd[1481]: time="2026-03-04T01:03:59.787659784Z" level=info msg="CreateContainer within sandbox \"58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 4 01:03:59.850450 containerd[1481]: time="2026-03-04T01:03:59.850276259Z" level=info msg="CreateContainer within sandbox \"58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ee4e5ab47658f2d351bd40874162057062ff90ce70a70308f95708038dfa62e6\"" Mar 4 01:03:59.857100 containerd[1481]: time="2026-03-04T01:03:59.851978575Z" level=info msg="StartContainer for 
\"ee4e5ab47658f2d351bd40874162057062ff90ce70a70308f95708038dfa62e6\"" Mar 4 01:03:59.864986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399815532.mount: Deactivated successfully. Mar 4 01:04:00.008989 systemd[1]: Started cri-containerd-ee4e5ab47658f2d351bd40874162057062ff90ce70a70308f95708038dfa62e6.scope - libcontainer container ee4e5ab47658f2d351bd40874162057062ff90ce70a70308f95708038dfa62e6. Mar 4 01:04:00.156459 containerd[1481]: time="2026-03-04T01:04:00.155762952Z" level=info msg="StartContainer for \"ee4e5ab47658f2d351bd40874162057062ff90ce70a70308f95708038dfa62e6\" returns successfully" Mar 4 01:04:00.417998 kubelet[2579]: E0304 01:04:00.413890 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:04:02.023074 kubelet[2579]: I0304 01:04:02.022987 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-66b64679fb-hzv4z" podStartSLOduration=66.997197747 podStartE2EDuration="1m12.022960444s" podCreationTimestamp="2026-03-04 01:02:50 +0000 UTC" firstStartedPulling="2026-03-04 01:03:53.596323449 +0000 UTC m=+99.568385240" lastFinishedPulling="2026-03-04 01:03:58.622086135 +0000 UTC m=+104.594147937" observedRunningTime="2026-03-04 01:03:59.432647397 +0000 UTC m=+105.404709219" watchObservedRunningTime="2026-03-04 01:04:02.022960444 +0000 UTC m=+107.995022246" Mar 4 01:04:02.660658 containerd[1481]: time="2026-03-04T01:04:02.660502271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:02.661712 containerd[1481]: time="2026-03-04T01:04:02.661620274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 4 01:04:02.663556 containerd[1481]: time="2026-03-04T01:04:02.663334885Z" 
level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:02.667406 containerd[1481]: time="2026-03-04T01:04:02.667293968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:02.668780 containerd[1481]: time="2026-03-04T01:04:02.668673221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.894652567s" Mar 4 01:04:02.668780 containerd[1481]: time="2026-03-04T01:04:02.668727932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 4 01:04:02.671007 containerd[1481]: time="2026-03-04T01:04:02.670679298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 01:04:02.696461 containerd[1481]: time="2026-03-04T01:04:02.696319666Z" level=info msg="CreateContainer within sandbox \"5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 4 01:04:02.722105 containerd[1481]: time="2026-03-04T01:04:02.721949285Z" level=info msg="CreateContainer within sandbox \"5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d3da32ea7f406f29550205d877b0f7d1469cce9c8cfdfef47c6235bc13e3dc92\"" Mar 4 01:04:02.723602 
containerd[1481]: time="2026-03-04T01:04:02.723437380Z" level=info msg="StartContainer for \"d3da32ea7f406f29550205d877b0f7d1469cce9c8cfdfef47c6235bc13e3dc92\"" Mar 4 01:04:02.788119 containerd[1481]: time="2026-03-04T01:04:02.787980825Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:02.790905 containerd[1481]: time="2026-03-04T01:04:02.790476352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 4 01:04:02.797783 containerd[1481]: time="2026-03-04T01:04:02.797677156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 126.949398ms" Mar 4 01:04:02.797783 containerd[1481]: time="2026-03-04T01:04:02.797778311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 01:04:02.799726 containerd[1481]: time="2026-03-04T01:04:02.799641909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 4 01:04:02.808847 containerd[1481]: time="2026-03-04T01:04:02.808749893Z" level=info msg="CreateContainer within sandbox \"c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:04:02.810802 systemd[1]: Started cri-containerd-d3da32ea7f406f29550205d877b0f7d1469cce9c8cfdfef47c6235bc13e3dc92.scope - libcontainer container d3da32ea7f406f29550205d877b0f7d1469cce9c8cfdfef47c6235bc13e3dc92. 
Mar 4 01:04:02.833063 containerd[1481]: time="2026-03-04T01:04:02.832469440Z" level=info msg="CreateContainer within sandbox \"c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"be19990c5c0cf92f27aaa50f6b7eefaab9e4d520edd68f956d8fd55226bf0b31\"" Mar 4 01:04:02.835583 containerd[1481]: time="2026-03-04T01:04:02.835455996Z" level=info msg="StartContainer for \"be19990c5c0cf92f27aaa50f6b7eefaab9e4d520edd68f956d8fd55226bf0b31\"" Mar 4 01:04:02.887021 systemd[1]: Started cri-containerd-be19990c5c0cf92f27aaa50f6b7eefaab9e4d520edd68f956d8fd55226bf0b31.scope - libcontainer container be19990c5c0cf92f27aaa50f6b7eefaab9e4d520edd68f956d8fd55226bf0b31. Mar 4 01:04:02.909089 containerd[1481]: time="2026-03-04T01:04:02.908961892Z" level=info msg="StartContainer for \"d3da32ea7f406f29550205d877b0f7d1469cce9c8cfdfef47c6235bc13e3dc92\" returns successfully" Mar 4 01:04:02.952107 containerd[1481]: time="2026-03-04T01:04:02.951817138Z" level=info msg="StartContainer for \"be19990c5c0cf92f27aaa50f6b7eefaab9e4d520edd68f956d8fd55226bf0b31\" returns successfully" Mar 4 01:04:03.148290 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:36116.service - OpenSSH per-connection server daemon (10.0.0.1:36116). Mar 4 01:04:03.242425 sshd[5575]: Accepted publickey for core from 10.0.0.1 port 36116 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:04:03.246056 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:04:03.257291 systemd-logind[1450]: New session 10 of user core. Mar 4 01:04:03.263801 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 4 01:04:03.465478 kubelet[2579]: I0304 01:04:03.465305 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dbf4f54f5-flgh7" podStartSLOduration=64.993115335 podStartE2EDuration="1m12.465223356s" podCreationTimestamp="2026-03-04 01:02:51 +0000 UTC" firstStartedPulling="2026-03-04 01:03:55.197831452 +0000 UTC m=+101.169893244" lastFinishedPulling="2026-03-04 01:04:02.669939472 +0000 UTC m=+108.642001265" observedRunningTime="2026-03-04 01:04:03.463567892 +0000 UTC m=+109.435629715" watchObservedRunningTime="2026-03-04 01:04:03.465223356 +0000 UTC m=+109.437285187" Mar 4 01:04:03.518589 kubelet[2579]: I0304 01:04:03.518294 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-66b64679fb-6fgtf" podStartSLOduration=67.941045291 podStartE2EDuration="1m13.518264228s" podCreationTimestamp="2026-03-04 01:02:50 +0000 UTC" firstStartedPulling="2026-03-04 01:03:57.221961891 +0000 UTC m=+103.194023683" lastFinishedPulling="2026-03-04 01:04:02.799180828 +0000 UTC m=+108.771242620" observedRunningTime="2026-03-04 01:04:03.49832325 +0000 UTC m=+109.470385052" watchObservedRunningTime="2026-03-04 01:04:03.518264228 +0000 UTC m=+109.490326081" Mar 4 01:04:03.724828 sshd[5575]: pam_unix(sshd:session): session closed for user core Mar 4 01:04:03.731023 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:36116.service: Deactivated successfully. Mar 4 01:04:03.733814 systemd[1]: session-10.scope: Deactivated successfully. Mar 4 01:04:03.735104 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Mar 4 01:04:03.737320 systemd-logind[1450]: Removed session 10. Mar 4 01:04:05.317620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3563389382.mount: Deactivated successfully. 
Mar 4 01:04:08.338808 containerd[1481]: time="2026-03-04T01:04:08.338474563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:08.346074 containerd[1481]: time="2026-03-04T01:04:08.344860933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 4 01:04:08.354231 containerd[1481]: time="2026-03-04T01:04:08.351721991Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:08.444845 containerd[1481]: time="2026-03-04T01:04:08.436481380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:08.444845 containerd[1481]: time="2026-03-04T01:04:08.438832331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.639115263s" Mar 4 01:04:08.444845 containerd[1481]: time="2026-03-04T01:04:08.444747058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 4 01:04:08.454801 containerd[1481]: time="2026-03-04T01:04:08.451482323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 4 01:04:08.471662 containerd[1481]: time="2026-03-04T01:04:08.471554352Z" level=info msg="CreateContainer within sandbox 
\"cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 4 01:04:08.545211 containerd[1481]: time="2026-03-04T01:04:08.542058084Z" level=info msg="CreateContainer within sandbox \"cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"baaddd55da6cd82f74c02f1a0e324457d6385bef811f3071324179b1a4c33f14\"" Mar 4 01:04:08.545211 containerd[1481]: time="2026-03-04T01:04:08.543982797Z" level=info msg="StartContainer for \"baaddd55da6cd82f74c02f1a0e324457d6385bef811f3071324179b1a4c33f14\"" Mar 4 01:04:08.704062 systemd[1]: Started cri-containerd-baaddd55da6cd82f74c02f1a0e324457d6385bef811f3071324179b1a4c33f14.scope - libcontainer container baaddd55da6cd82f74c02f1a0e324457d6385bef811f3071324179b1a4c33f14. Mar 4 01:04:08.784240 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:36124.service - OpenSSH per-connection server daemon (10.0.0.1:36124). Mar 4 01:04:08.983550 containerd[1481]: time="2026-03-04T01:04:08.982897669Z" level=info msg="StartContainer for \"baaddd55da6cd82f74c02f1a0e324457d6385bef811f3071324179b1a4c33f14\" returns successfully" Mar 4 01:04:09.074103 sshd[5671]: Accepted publickey for core from 10.0.0.1 port 36124 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:04:09.082109 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:04:09.103575 systemd-logind[1450]: New session 11 of user core. Mar 4 01:04:09.113953 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 4 01:04:09.795264 sshd[5671]: pam_unix(sshd:session): session closed for user core Mar 4 01:04:09.809230 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:36124.service: Deactivated successfully. Mar 4 01:04:09.814343 systemd[1]: session-11.scope: Deactivated successfully. Mar 4 01:04:09.821295 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. 
Mar 4 01:04:09.824999 systemd-logind[1450]: Removed session 11. Mar 4 01:04:11.097644 containerd[1481]: time="2026-03-04T01:04:11.097526075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:11.099483 containerd[1481]: time="2026-03-04T01:04:11.099244490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 4 01:04:11.104857 containerd[1481]: time="2026-03-04T01:04:11.104579657Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:11.114940 containerd[1481]: time="2026-03-04T01:04:11.114780435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:04:11.115851 containerd[1481]: time="2026-03-04T01:04:11.115763892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.664227409s" Mar 4 01:04:11.115851 containerd[1481]: time="2026-03-04T01:04:11.115838429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 4 01:04:11.125801 containerd[1481]: time="2026-03-04T01:04:11.125279015Z" level=info msg="CreateContainer within sandbox 
\"58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 4 01:04:11.235005 containerd[1481]: time="2026-03-04T01:04:11.234809287Z" level=info msg="CreateContainer within sandbox \"58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3135e3d2206daeed759ad665e1a4df05f91fa6affff7f677f2f41fb65db25523\"" Mar 4 01:04:11.236909 containerd[1481]: time="2026-03-04T01:04:11.236815133Z" level=info msg="StartContainer for \"3135e3d2206daeed759ad665e1a4df05f91fa6affff7f677f2f41fb65db25523\"" Mar 4 01:04:11.294019 systemd[1]: Started cri-containerd-3135e3d2206daeed759ad665e1a4df05f91fa6affff7f677f2f41fb65db25523.scope - libcontainer container 3135e3d2206daeed759ad665e1a4df05f91fa6affff7f677f2f41fb65db25523. Mar 4 01:04:11.349490 containerd[1481]: time="2026-03-04T01:04:11.349136726Z" level=info msg="StartContainer for \"3135e3d2206daeed759ad665e1a4df05f91fa6affff7f677f2f41fb65db25523\" returns successfully" Mar 4 01:04:11.640540 kubelet[2579]: I0304 01:04:11.636704 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-whcgp" podStartSLOduration=70.650727157 podStartE2EDuration="1m21.636677392s" podCreationTimestamp="2026-03-04 01:02:50 +0000 UTC" firstStartedPulling="2026-03-04 01:03:57.461160641 +0000 UTC m=+103.433222432" lastFinishedPulling="2026-03-04 01:04:08.447110865 +0000 UTC m=+114.419172667" observedRunningTime="2026-03-04 01:04:09.569861012 +0000 UTC m=+115.541922834" watchObservedRunningTime="2026-03-04 01:04:11.636677392 +0000 UTC m=+117.608739225" Mar 4 01:04:12.209155 kubelet[2579]: I0304 01:04:12.208996 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rs6f4" podStartSLOduration=65.248317666 podStartE2EDuration="1m21.208970161s" 
podCreationTimestamp="2026-03-04 01:02:51 +0000 UTC" firstStartedPulling="2026-03-04 01:03:55.156504828 +0000 UTC m=+101.128566621" lastFinishedPulling="2026-03-04 01:04:11.117157324 +0000 UTC m=+117.089219116" observedRunningTime="2026-03-04 01:04:11.641602174 +0000 UTC m=+117.613663996" watchObservedRunningTime="2026-03-04 01:04:12.208970161 +0000 UTC m=+118.181031983" Mar 4 01:04:12.369111 kubelet[2579]: I0304 01:04:12.368942 2579 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 4 01:04:12.377880 kubelet[2579]: I0304 01:04:12.377756 2579 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 4 01:04:14.553680 containerd[1481]: time="2026-03-04T01:04:14.552798027Z" level=info msg="StopPodSandbox for \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\"" Mar 4 01:04:14.829752 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:37410.service - OpenSSH per-connection server daemon (10.0.0.1:37410). Mar 4 01:04:15.009111 sshd[5829]: Accepted publickey for core from 10.0.0.1 port 37410 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:04:15.017566 sshd[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:04:15.043562 systemd-logind[1450]: New session 12 of user core. Mar 4 01:04:15.054310 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:14.818 [WARNING][5819] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0", GenerateName:"calico-apiserver-66b64679fb-", Namespace:"calico-system", SelfLink:"", UID:"55e105c4-cb80-455a-abe4-3f8ab66ac4c8", ResourceVersion:"1294", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b64679fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34", Pod:"calico-apiserver-66b64679fb-hzv4z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali80bc3f7dafa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:14.822 [INFO][5819] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:14.822 [INFO][5819] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" iface="eth0" netns="" Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:14.822 [INFO][5819] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:14.822 [INFO][5819] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:15.109 [INFO][5831] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" HandleID="k8s-pod-network.e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:15.111 [INFO][5831] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:15.111 [INFO][5831] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:15.142 [WARNING][5831] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" HandleID="k8s-pod-network.e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:15.142 [INFO][5831] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" HandleID="k8s-pod-network.e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:15.152 [INFO][5831] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:15.174310 containerd[1481]: 2026-03-04 01:04:15.162 [INFO][5819] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:04:15.174310 containerd[1481]: time="2026-03-04T01:04:15.173787046Z" level=info msg="TearDown network for sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\" successfully" Mar 4 01:04:15.174310 containerd[1481]: time="2026-03-04T01:04:15.173842458Z" level=info msg="StopPodSandbox for \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\" returns successfully" Mar 4 01:04:15.251942 containerd[1481]: time="2026-03-04T01:04:15.251820063Z" level=info msg="RemovePodSandbox for \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\"" Mar 4 01:04:15.256580 containerd[1481]: time="2026-03-04T01:04:15.256339167Z" level=info msg="Forcibly stopping sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\"" Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.357 [WARNING][5858] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0", GenerateName:"calico-apiserver-66b64679fb-", Namespace:"calico-system", SelfLink:"", UID:"55e105c4-cb80-455a-abe4-3f8ab66ac4c8", ResourceVersion:"1294", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b64679fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"681da3428fbae16e0316f4f336398533eb5b7b85080af4fcf42c48a39a4e1e34", Pod:"calico-apiserver-66b64679fb-hzv4z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali80bc3f7dafa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.357 [INFO][5858] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.357 [INFO][5858] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" iface="eth0" netns="" Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.357 [INFO][5858] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.357 [INFO][5858] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.421 [INFO][5868] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" HandleID="k8s-pod-network.e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.421 [INFO][5868] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.422 [INFO][5868] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.444 [WARNING][5868] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" HandleID="k8s-pod-network.e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.444 [INFO][5868] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" HandleID="k8s-pod-network.e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Workload="localhost-k8s-calico--apiserver--66b64679fb--hzv4z-eth0" Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.452 [INFO][5868] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:15.471195 containerd[1481]: 2026-03-04 01:04:15.463 [INFO][5858] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752" Mar 4 01:04:15.471195 containerd[1481]: time="2026-03-04T01:04:15.469817608Z" level=info msg="TearDown network for sandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\" successfully" Mar 4 01:04:15.560715 containerd[1481]: time="2026-03-04T01:04:15.560567922Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:04:15.561296 containerd[1481]: time="2026-03-04T01:04:15.560757181Z" level=info msg="RemovePodSandbox \"e6e402f60807506f6a5e127c496bfd5b77ab1ad55041f0987d496ff333bfd752\" returns successfully" Mar 4 01:04:15.561584 sshd[5829]: pam_unix(sshd:session): session closed for user core Mar 4 01:04:15.577495 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:37410.service: Deactivated successfully. 
Mar 4 01:04:15.582724 containerd[1481]: time="2026-03-04T01:04:15.582596492Z" level=info msg="StopPodSandbox for \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\"" Mar 4 01:04:15.590340 systemd[1]: session-12.scope: Deactivated successfully. Mar 4 01:04:15.608962 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Mar 4 01:04:15.617813 systemd-logind[1450]: Removed session 12. Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.719 [WARNING][5888] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rr9gl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ba3a3117-2ed7-420f-9281-01467babd9c7", ResourceVersion:"1241", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269", Pod:"coredns-66bc5c9577-rr9gl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57bb53015a6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.719 [INFO][5888] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.719 [INFO][5888] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" iface="eth0" netns="" Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.720 [INFO][5888] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.720 [INFO][5888] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.819 [INFO][5896] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" HandleID="k8s-pod-network.793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.820 [INFO][5896] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.821 [INFO][5896] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.835 [WARNING][5896] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" HandleID="k8s-pod-network.793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.835 [INFO][5896] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" HandleID="k8s-pod-network.793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.839 [INFO][5896] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:15.848869 containerd[1481]: 2026-03-04 01:04:15.843 [INFO][5888] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:04:15.849765 containerd[1481]: time="2026-03-04T01:04:15.849064755Z" level=info msg="TearDown network for sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\" successfully" Mar 4 01:04:15.849765 containerd[1481]: time="2026-03-04T01:04:15.849106402Z" level=info msg="StopPodSandbox for \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\" returns successfully" Mar 4 01:04:15.850207 containerd[1481]: time="2026-03-04T01:04:15.850101152Z" level=info msg="RemovePodSandbox for \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\"" Mar 4 01:04:15.850207 containerd[1481]: time="2026-03-04T01:04:15.850148168Z" level=info msg="Forcibly stopping sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\"" Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:15.940 [WARNING][5913] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rr9gl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ba3a3117-2ed7-420f-9281-01467babd9c7", ResourceVersion:"1241", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31401792a14f9c016f73a2ba6c595de28b5fec0ee143ace48883fb344915a269", Pod:"coredns-66bc5c9577-rr9gl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57bb53015a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:15.941 [INFO][5913] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:15.941 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" iface="eth0" netns="" Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:15.941 [INFO][5913] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:15.941 [INFO][5913] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:15.981 [INFO][5922] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" HandleID="k8s-pod-network.793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:15.981 [INFO][5922] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:15.981 [INFO][5922] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:15.995 [WARNING][5922] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" HandleID="k8s-pod-network.793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:15.995 [INFO][5922] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" HandleID="k8s-pod-network.793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Workload="localhost-k8s-coredns--66bc5c9577--rr9gl-eth0" Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:16.003 [INFO][5922] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:16.016457 containerd[1481]: 2026-03-04 01:04:16.006 [INFO][5913] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4" Mar 4 01:04:16.016457 containerd[1481]: time="2026-03-04T01:04:16.013788280Z" level=info msg="TearDown network for sandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\" successfully" Mar 4 01:04:16.023933 containerd[1481]: time="2026-03-04T01:04:16.023782363Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:04:16.024153 containerd[1481]: time="2026-03-04T01:04:16.024007478Z" level=info msg="RemovePodSandbox \"793450cf9f29652d213482d8228200bf22c7294677fde4ba7753e7a5bd5c43b4\" returns successfully" Mar 4 01:04:16.025102 containerd[1481]: time="2026-03-04T01:04:16.025000735Z" level=info msg="StopPodSandbox for \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\"" Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.109 [WARNING][5939] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"883fc4b1-6269-44ab-9fb7-38da3bb836eb", ResourceVersion:"1348", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538", Pod:"goldmane-cccfbd5cf-whcgp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6414439ad9a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.109 [INFO][5939] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.109 [INFO][5939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" iface="eth0" netns="" Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.109 [INFO][5939] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.109 [INFO][5939] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.154 [INFO][5948] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" HandleID="k8s-pod-network.01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.154 [INFO][5948] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.154 [INFO][5948] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.164 [WARNING][5948] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" HandleID="k8s-pod-network.01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.165 [INFO][5948] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" HandleID="k8s-pod-network.01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.168 [INFO][5948] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:16.177275 containerd[1481]: 2026-03-04 01:04:16.172 [INFO][5939] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Mar 4 01:04:16.177275 containerd[1481]: time="2026-03-04T01:04:16.177097614Z" level=info msg="TearDown network for sandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\" successfully" Mar 4 01:04:16.177275 containerd[1481]: time="2026-03-04T01:04:16.177134784Z" level=info msg="StopPodSandbox for \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\" returns successfully" Mar 4 01:04:16.179128 containerd[1481]: time="2026-03-04T01:04:16.177892134Z" level=info msg="RemovePodSandbox for \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\"" Mar 4 01:04:16.179128 containerd[1481]: time="2026-03-04T01:04:16.177930415Z" level=info msg="Forcibly stopping sandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\"" Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.251 [WARNING][5964] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"883fc4b1-6269-44ab-9fb7-38da3bb836eb", ResourceVersion:"1348", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cef8018ce80220dcc841999f685037e2bc80a0ff0cb2887de1c7be826e404538", Pod:"goldmane-cccfbd5cf-whcgp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6414439ad9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.251 [INFO][5964] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.251 [INFO][5964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" iface="eth0" netns="" Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.251 [INFO][5964] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.251 [INFO][5964] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.294 [INFO][5972] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" HandleID="k8s-pod-network.01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.294 [INFO][5972] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.294 [INFO][5972] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.306 [WARNING][5972] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" HandleID="k8s-pod-network.01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.306 [INFO][5972] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" HandleID="k8s-pod-network.01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Workload="localhost-k8s-goldmane--cccfbd5cf--whcgp-eth0" Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.309 [INFO][5972] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:16.319241 containerd[1481]: 2026-03-04 01:04:16.313 [INFO][5964] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0" Mar 4 01:04:16.320073 containerd[1481]: time="2026-03-04T01:04:16.319274850Z" level=info msg="TearDown network for sandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\" successfully" Mar 4 01:04:16.328015 containerd[1481]: time="2026-03-04T01:04:16.327720985Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:04:16.328015 containerd[1481]: time="2026-03-04T01:04:16.327855283Z" level=info msg="RemovePodSandbox \"01a60bd03b4502029a958d503e7e8b4f8150ad4b250ac2e192ca131447dcefd0\" returns successfully" Mar 4 01:04:16.328656 containerd[1481]: time="2026-03-04T01:04:16.328617913Z" level=info msg="StopPodSandbox for \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\"" Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.402 [WARNING][5991] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rs6f4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24812854-f0ac-4651-986c-4d61a0df5440", ResourceVersion:"1364", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a", Pod:"csi-node-driver-rs6f4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0cc479e32f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.403 [INFO][5991] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.403 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" iface="eth0" netns="" Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.403 [INFO][5991] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.403 [INFO][5991] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.450 [INFO][5999] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" HandleID="k8s-pod-network.cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0" Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.451 [INFO][5999] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.452 [INFO][5999] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.467 [WARNING][5999] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" HandleID="k8s-pod-network.cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0" Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.467 [INFO][5999] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" HandleID="k8s-pod-network.cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0" Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.472 [INFO][5999] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:16.480908 containerd[1481]: 2026-03-04 01:04:16.477 [INFO][5991] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Mar 4 01:04:16.480908 containerd[1481]: time="2026-03-04T01:04:16.480750106Z" level=info msg="TearDown network for sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\" successfully" Mar 4 01:04:16.480908 containerd[1481]: time="2026-03-04T01:04:16.480783758Z" level=info msg="StopPodSandbox for \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\" returns successfully" Mar 4 01:04:16.481946 containerd[1481]: time="2026-03-04T01:04:16.481736087Z" level=info msg="RemovePodSandbox for \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\"" Mar 4 01:04:16.481946 containerd[1481]: time="2026-03-04T01:04:16.481786751Z" level=info msg="Forcibly stopping sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\"" Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.568 [WARNING][6017] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rs6f4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24812854-f0ac-4651-986c-4d61a0df5440", ResourceVersion:"1364", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58cf5fa474792a227a4c7a31eae3a9f24b46a5b1bd46e276a8f761ceb3210e3a", Pod:"csi-node-driver-rs6f4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0cc479e32f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.569 [INFO][6017] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.569 [INFO][6017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" iface="eth0" netns="" Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.569 [INFO][6017] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.569 [INFO][6017] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.622 [INFO][6026] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" HandleID="k8s-pod-network.cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0" Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.622 [INFO][6026] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.623 [INFO][6026] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.632 [WARNING][6026] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" HandleID="k8s-pod-network.cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0" Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.632 [INFO][6026] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" HandleID="k8s-pod-network.cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Workload="localhost-k8s-csi--node--driver--rs6f4-eth0" Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.635 [INFO][6026] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:16.641596 containerd[1481]: 2026-03-04 01:04:16.637 [INFO][6017] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80" Mar 4 01:04:16.642544 containerd[1481]: time="2026-03-04T01:04:16.641655498Z" level=info msg="TearDown network for sandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\" successfully" Mar 4 01:04:16.646750 containerd[1481]: time="2026-03-04T01:04:16.646648569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:04:16.646750 containerd[1481]: time="2026-03-04T01:04:16.646744826Z" level=info msg="RemovePodSandbox \"cdd1c763273939c49c8120c77e00623d5b083e84b3f7472a889d76d4d79a0c80\" returns successfully" Mar 4 01:04:16.648156 containerd[1481]: time="2026-03-04T01:04:16.647806709Z" level=info msg="StopPodSandbox for \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\"" Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.715 [WARNING][6044] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0", GenerateName:"calico-apiserver-66b64679fb-", Namespace:"calico-system", SelfLink:"", UID:"9c0b93c3-34c4-4c8f-bfc1-54f9448d999f", ResourceVersion:"1324", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b64679fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6", Pod:"calico-apiserver-66b64679fb-6fgtf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4c64575a74f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.716 [INFO][6044] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.716 [INFO][6044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" iface="eth0" netns="" Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.716 [INFO][6044] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.716 [INFO][6044] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.751 [INFO][6053] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" HandleID="k8s-pod-network.98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0" Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.751 [INFO][6053] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.751 [INFO][6053] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.759 [WARNING][6053] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" HandleID="k8s-pod-network.98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0" Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.759 [INFO][6053] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" HandleID="k8s-pod-network.98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0" Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.763 [INFO][6053] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:16.771223 containerd[1481]: 2026-03-04 01:04:16.767 [INFO][6044] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Mar 4 01:04:16.771223 containerd[1481]: time="2026-03-04T01:04:16.771094271Z" level=info msg="TearDown network for sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\" successfully" Mar 4 01:04:16.771223 containerd[1481]: time="2026-03-04T01:04:16.771123335Z" level=info msg="StopPodSandbox for \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\" returns successfully" Mar 4 01:04:16.772167 containerd[1481]: time="2026-03-04T01:04:16.772104234Z" level=info msg="RemovePodSandbox for \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\"" Mar 4 01:04:16.772216 containerd[1481]: time="2026-03-04T01:04:16.772167190Z" level=info msg="Forcibly stopping sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\"" Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.845 [WARNING][6070] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0", GenerateName:"calico-apiserver-66b64679fb-", Namespace:"calico-system", SelfLink:"", UID:"9c0b93c3-34c4-4c8f-bfc1-54f9448d999f", ResourceVersion:"1324", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b64679fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c52ba9bdffc8e51b15c764b6d03439696a9e4f44d02ef7154b012feb8c72e1d6", Pod:"calico-apiserver-66b64679fb-6fgtf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4c64575a74f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.845 [INFO][6070] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.845 [INFO][6070] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" iface="eth0" netns="" Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.845 [INFO][6070] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.845 [INFO][6070] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.886 [INFO][6080] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" HandleID="k8s-pod-network.98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0" Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.886 [INFO][6080] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.886 [INFO][6080] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.896 [WARNING][6080] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" HandleID="k8s-pod-network.98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0" Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.896 [INFO][6080] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" HandleID="k8s-pod-network.98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Workload="localhost-k8s-calico--apiserver--66b64679fb--6fgtf-eth0" Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.899 [INFO][6080] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:04:16.906118 containerd[1481]: 2026-03-04 01:04:16.902 [INFO][6070] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb" Mar 4 01:04:16.907333 containerd[1481]: time="2026-03-04T01:04:16.906175305Z" level=info msg="TearDown network for sandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\" successfully" Mar 4 01:04:16.912084 containerd[1481]: time="2026-03-04T01:04:16.911832673Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:04:16.912084 containerd[1481]: time="2026-03-04T01:04:16.911941814Z" level=info msg="RemovePodSandbox \"98ee0042a2107a34100a5df8fde0620b618824618733d470c25f04b3f7399fbb\" returns successfully"
Mar 4 01:04:16.912987 containerd[1481]: time="2026-03-04T01:04:16.912911302Z" level=info msg="StopPodSandbox for \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\""
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:16.972 [WARNING][6096] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" WorkloadEndpoint="localhost-k8s-whisker--87f89c9c8--fnddc-eth0"
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:16.973 [INFO][6096] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93"
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:16.973 [INFO][6096] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" iface="eth0" netns=""
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:16.973 [INFO][6096] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93"
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:16.973 [INFO][6096] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93"
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:17.016 [INFO][6104] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" HandleID="k8s-pod-network.1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Workload="localhost-k8s-whisker--87f89c9c8--fnddc-eth0"
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:17.016 [INFO][6104] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:17.016 [INFO][6104] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:17.025 [WARNING][6104] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" HandleID="k8s-pod-network.1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Workload="localhost-k8s-whisker--87f89c9c8--fnddc-eth0"
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:17.025 [INFO][6104] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" HandleID="k8s-pod-network.1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Workload="localhost-k8s-whisker--87f89c9c8--fnddc-eth0"
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:17.029 [INFO][6104] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:04:17.037946 containerd[1481]: 2026-03-04 01:04:17.033 [INFO][6096] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93"
Mar 4 01:04:17.037946 containerd[1481]: time="2026-03-04T01:04:17.037783849Z" level=info msg="TearDown network for sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\" successfully"
Mar 4 01:04:17.037946 containerd[1481]: time="2026-03-04T01:04:17.037822962Z" level=info msg="StopPodSandbox for \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\" returns successfully"
Mar 4 01:04:17.039072 containerd[1481]: time="2026-03-04T01:04:17.038927981Z" level=info msg="RemovePodSandbox for \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\""
Mar 4 01:04:17.039072 containerd[1481]: time="2026-03-04T01:04:17.039005313Z" level=info msg="Forcibly stopping sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\""
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.101 [WARNING][6121] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" WorkloadEndpoint="localhost-k8s-whisker--87f89c9c8--fnddc-eth0"
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.102 [INFO][6121] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93"
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.102 [INFO][6121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" iface="eth0" netns=""
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.102 [INFO][6121] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93"
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.102 [INFO][6121] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93"
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.145 [INFO][6129] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" HandleID="k8s-pod-network.1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Workload="localhost-k8s-whisker--87f89c9c8--fnddc-eth0"
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.145 [INFO][6129] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.146 [INFO][6129] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.154 [WARNING][6129] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" HandleID="k8s-pod-network.1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Workload="localhost-k8s-whisker--87f89c9c8--fnddc-eth0"
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.155 [INFO][6129] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" HandleID="k8s-pod-network.1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93" Workload="localhost-k8s-whisker--87f89c9c8--fnddc-eth0"
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.157 [INFO][6129] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:04:17.164961 containerd[1481]: 2026-03-04 01:04:17.161 [INFO][6121] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93"
Mar 4 01:04:17.165596 containerd[1481]: time="2026-03-04T01:04:17.164986548Z" level=info msg="TearDown network for sandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\" successfully"
Mar 4 01:04:17.172794 containerd[1481]: time="2026-03-04T01:04:17.172657558Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 4 01:04:17.173023 containerd[1481]: time="2026-03-04T01:04:17.172924611Z" level=info msg="RemovePodSandbox \"1c92daa97daf1ce6d8d801cadbe870645a0549b517caa55713a6911890800a93\" returns successfully"
Mar 4 01:04:17.174137 containerd[1481]: time="2026-03-04T01:04:17.174054548Z" level=info msg="StopPodSandbox for \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\""
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.248 [WARNING][6146] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0", GenerateName:"calico-kube-controllers-6dbf4f54f5-", Namespace:"calico-system", SelfLink:"", UID:"28010a71-1727-4efe-b343-de6e69fbd281", ResourceVersion:"1315", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dbf4f54f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d", Pod:"calico-kube-controllers-6dbf4f54f5-flgh7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2af97af437d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.249 [INFO][6146] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.249 [INFO][6146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" iface="eth0" netns=""
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.249 [INFO][6146] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.249 [INFO][6146] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.290 [INFO][6154] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" HandleID="k8s-pod-network.bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.291 [INFO][6154] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.291 [INFO][6154] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.302 [WARNING][6154] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" HandleID="k8s-pod-network.bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.302 [INFO][6154] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" HandleID="k8s-pod-network.bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.305 [INFO][6154] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:04:17.325567 containerd[1481]: 2026-03-04 01:04:17.309 [INFO][6146] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:04:17.330683 containerd[1481]: time="2026-03-04T01:04:17.325922495Z" level=info msg="TearDown network for sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\" successfully"
Mar 4 01:04:17.330683 containerd[1481]: time="2026-03-04T01:04:17.326483914Z" level=info msg="StopPodSandbox for \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\" returns successfully"
Mar 4 01:04:17.335596 containerd[1481]: time="2026-03-04T01:04:17.335559351Z" level=info msg="RemovePodSandbox for \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\""
Mar 4 01:04:17.336040 containerd[1481]: time="2026-03-04T01:04:17.335863935Z" level=info msg="Forcibly stopping sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\""
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.435 [WARNING][6171] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0", GenerateName:"calico-kube-controllers-6dbf4f54f5-", Namespace:"calico-system", SelfLink:"", UID:"28010a71-1727-4efe-b343-de6e69fbd281", ResourceVersion:"1315", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dbf4f54f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c51a434e0c14b8b73a1cb97d58470255d891cf4da2baba50416ff8fa3e8496d", Pod:"calico-kube-controllers-6dbf4f54f5-flgh7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2af97af437d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.436 [INFO][6171] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.436 [INFO][6171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" iface="eth0" netns=""
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.436 [INFO][6171] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.436 [INFO][6171] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.503 [INFO][6185] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" HandleID="k8s-pod-network.bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.503 [INFO][6185] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.503 [INFO][6185] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.517 [WARNING][6185] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" HandleID="k8s-pod-network.bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.517 [INFO][6185] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" HandleID="k8s-pod-network.bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58" Workload="localhost-k8s-calico--kube--controllers--6dbf4f54f5--flgh7-eth0"
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.527 [INFO][6185] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:04:17.535922 containerd[1481]: 2026-03-04 01:04:17.531 [INFO][6171] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58"
Mar 4 01:04:17.535922 containerd[1481]: time="2026-03-04T01:04:17.535625399Z" level=info msg="TearDown network for sandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\" successfully"
Mar 4 01:04:17.665762 containerd[1481]: time="2026-03-04T01:04:17.665498941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 4 01:04:17.665762 containerd[1481]: time="2026-03-04T01:04:17.665625605Z" level=info msg="RemovePodSandbox \"bd2134d1a165b28e59003b8faec434d6bdd9d7449192dd549f9025a2591f5b58\" returns successfully"
Mar 4 01:04:17.666663 containerd[1481]: time="2026-03-04T01:04:17.666596422Z" level=info msg="StopPodSandbox for \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\""
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.748 [WARNING][6203] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--svjm8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0199c151-24ad-4cf2-ae91-b7e9b350322f", ResourceVersion:"1258", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9", Pod:"coredns-66bc5c9577-svjm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e98f0db6aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.748 [INFO][6203] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.748 [INFO][6203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" iface="eth0" netns=""
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.748 [INFO][6203] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.748 [INFO][6203] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.797 [INFO][6212] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" HandleID="k8s-pod-network.549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.797 [INFO][6212] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.798 [INFO][6212] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.826 [WARNING][6212] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" HandleID="k8s-pod-network.549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.827 [INFO][6212] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" HandleID="k8s-pod-network.549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.832 [INFO][6212] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:04:17.842752 containerd[1481]: 2026-03-04 01:04:17.837 [INFO][6203] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:04:17.842752 containerd[1481]: time="2026-03-04T01:04:17.842456490Z" level=info msg="TearDown network for sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\" successfully"
Mar 4 01:04:17.842752 containerd[1481]: time="2026-03-04T01:04:17.842500101Z" level=info msg="StopPodSandbox for \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\" returns successfully"
Mar 4 01:04:17.844704 containerd[1481]: time="2026-03-04T01:04:17.844119596Z" level=info msg="RemovePodSandbox for \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\""
Mar 4 01:04:17.844704 containerd[1481]: time="2026-03-04T01:04:17.844166362Z" level=info msg="Forcibly stopping sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\""
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:17.974 [WARNING][6229] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--svjm8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0199c151-24ad-4cf2-ae91-b7e9b350322f", ResourceVersion:"1258", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 2, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b04db128a8691221c848b483526635d5775d26559f74d00e007013b89e7510e9", Pod:"coredns-66bc5c9577-svjm8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e98f0db6aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:17.974 [INFO][6229] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:17.974 [INFO][6229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" iface="eth0" netns=""
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:17.974 [INFO][6229] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:17.974 [INFO][6229] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:18.062 [INFO][6237] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" HandleID="k8s-pod-network.549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:18.063 [INFO][6237] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:18.063 [INFO][6237] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:18.071 [WARNING][6237] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" HandleID="k8s-pod-network.549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:18.071 [INFO][6237] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" HandleID="k8s-pod-network.549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22" Workload="localhost-k8s-coredns--66bc5c9577--svjm8-eth0"
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:18.074 [INFO][6237] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 01:04:18.082162 containerd[1481]: 2026-03-04 01:04:18.078 [INFO][6229] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22"
Mar 4 01:04:18.083877 containerd[1481]: time="2026-03-04T01:04:18.082237512Z" level=info msg="TearDown network for sandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\" successfully"
Mar 4 01:04:18.090016 containerd[1481]: time="2026-03-04T01:04:18.089797173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 4 01:04:18.090712 containerd[1481]: time="2026-03-04T01:04:18.090035024Z" level=info msg="RemovePodSandbox \"549d64174d3cb3e4ba07029c8712ccd4e11e967cfb30cd280d03f9c44c32cb22\" returns successfully"
Mar 4 01:04:20.581944 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:50744.service - OpenSSH per-connection server daemon (10.0.0.1:50744).
Mar 4 01:04:20.642518 sshd[6246]: Accepted publickey for core from 10.0.0.1 port 50744 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:20.645185 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:20.651585 systemd-logind[1450]: New session 13 of user core.
Mar 4 01:04:20.656660 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 4 01:04:20.833978 sshd[6246]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:20.840500 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:50744.service: Deactivated successfully.
Mar 4 01:04:20.843263 systemd[1]: session-13.scope: Deactivated successfully.
Mar 4 01:04:20.845759 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit.
Mar 4 01:04:20.848047 systemd-logind[1450]: Removed session 13.
Mar 4 01:04:22.624138 kubelet[2579]: E0304 01:04:22.624092 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:25.854113 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:50758.service - OpenSSH per-connection server daemon (10.0.0.1:50758).
Mar 4 01:04:25.897779 sshd[6293]: Accepted publickey for core from 10.0.0.1 port 50758 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:25.899628 sshd[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:25.905046 systemd-logind[1450]: New session 14 of user core.
Mar 4 01:04:25.911531 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 4 01:04:26.059846 sshd[6293]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:26.072077 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:50758.service: Deactivated successfully.
Mar 4 01:04:26.074157 systemd[1]: session-14.scope: Deactivated successfully.
Mar 4 01:04:26.076226 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit.
Mar 4 01:04:26.082721 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:50774.service - OpenSSH per-connection server daemon (10.0.0.1:50774).
Mar 4 01:04:26.083860 systemd-logind[1450]: Removed session 14.
Mar 4 01:04:26.118298 sshd[6308]: Accepted publickey for core from 10.0.0.1 port 50774 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:26.120225 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:26.126536 systemd-logind[1450]: New session 15 of user core.
Mar 4 01:04:26.135566 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 4 01:04:26.345443 sshd[6308]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:26.356249 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:50774.service: Deactivated successfully.
Mar 4 01:04:26.360221 systemd[1]: session-15.scope: Deactivated successfully.
Mar 4 01:04:26.364729 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit.
Mar 4 01:04:26.379309 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:50784.service - OpenSSH per-connection server daemon (10.0.0.1:50784).
Mar 4 01:04:26.406744 systemd-logind[1450]: Removed session 15.
Mar 4 01:04:26.449481 sshd[6320]: Accepted publickey for core from 10.0.0.1 port 50784 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:26.452448 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:26.461406 systemd-logind[1450]: New session 16 of user core.
Mar 4 01:04:26.477679 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 4 01:04:26.728931 sshd[6320]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:26.734119 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:50784.service: Deactivated successfully.
Mar 4 01:04:26.737532 systemd[1]: session-16.scope: Deactivated successfully.
Mar 4 01:04:26.738909 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit.
Mar 4 01:04:26.741160 systemd-logind[1450]: Removed session 16.
Mar 4 01:04:31.834641 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:40964.service - OpenSSH per-connection server daemon (10.0.0.1:40964).
Mar 4 01:04:32.859430 sshd[6335]: Accepted publickey for core from 10.0.0.1 port 40964 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:33.157582 sshd[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:33.377653 systemd-logind[1450]: New session 17 of user core.
Mar 4 01:04:33.437652 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 4 01:04:35.528042 sshd[6335]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:35.544643 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:40964.service: Deactivated successfully.
Mar 4 01:04:35.558111 systemd[1]: session-17.scope: Deactivated successfully.
Mar 4 01:04:35.574976 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit.
Mar 4 01:04:35.582269 systemd-logind[1450]: Removed session 17.
Mar 4 01:04:40.582125 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:55826.service - OpenSSH per-connection server daemon (10.0.0.1:55826).
Mar 4 01:04:40.696143 sshd[6429]: Accepted publickey for core from 10.0.0.1 port 55826 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:40.699715 sshd[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:40.721540 systemd-logind[1450]: New session 18 of user core.
Mar 4 01:04:40.729664 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 4 01:04:41.365751 sshd[6429]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:41.383792 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:55826.service: Deactivated successfully.
Mar 4 01:04:41.391466 systemd[1]: session-18.scope: Deactivated successfully.
Mar 4 01:04:41.394762 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit.
Mar 4 01:04:41.411774 systemd-logind[1450]: Removed session 18.
Mar 4 01:04:46.483925 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:55834.service - OpenSSH per-connection server daemon (10.0.0.1:55834).
Mar 4 01:04:46.574045 sshd[6481]: Accepted publickey for core from 10.0.0.1 port 55834 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:46.577906 sshd[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:46.591931 systemd-logind[1450]: New session 19 of user core.
Mar 4 01:04:46.600887 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 4 01:04:47.162141 sshd[6481]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:47.213577 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:55834.service: Deactivated successfully.
Mar 4 01:04:47.253662 systemd[1]: session-19.scope: Deactivated successfully.
Mar 4 01:04:47.262027 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit.
Mar 4 01:04:47.310106 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:55838.service - OpenSSH per-connection server daemon (10.0.0.1:55838).
Mar 4 01:04:47.316543 systemd-logind[1450]: Removed session 19.
Mar 4 01:04:47.423582 sshd[6495]: Accepted publickey for core from 10.0.0.1 port 55838 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:47.426220 sshd[6495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:47.466798 systemd-logind[1450]: New session 20 of user core.
Mar 4 01:04:47.481487 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 4 01:04:48.194503 sshd[6495]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:48.213726 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:55848.service - OpenSSH per-connection server daemon (10.0.0.1:55848).
Mar 4 01:04:48.215076 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:55838.service: Deactivated successfully.
Mar 4 01:04:48.224620 systemd[1]: session-20.scope: Deactivated successfully.
Mar 4 01:04:48.233087 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit.
Mar 4 01:04:48.260980 systemd-logind[1450]: Removed session 20.
Mar 4 01:04:49.775855 sshd[6506]: Accepted publickey for core from 10.0.0.1 port 55848 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:49.783684 sshd[6506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:49.834794 systemd-logind[1450]: New session 21 of user core.
Mar 4 01:04:49.840964 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 4 01:04:50.632724 kubelet[2579]: E0304 01:04:50.632437 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:51.541886 sshd[6506]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:51.550027 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:55848.service: Deactivated successfully.
Mar 4 01:04:51.559209 systemd[1]: session-21.scope: Deactivated successfully.
Mar 4 01:04:51.560985 systemd[1]: session-21.scope: Consumed 1.633s CPU time.
Mar 4 01:04:51.563692 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit.
Mar 4 01:04:51.571845 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:51430.service - OpenSSH per-connection server daemon (10.0.0.1:51430).
Mar 4 01:04:51.574005 systemd-logind[1450]: Removed session 21.
Mar 4 01:04:51.636145 sshd[6534]: Accepted publickey for core from 10.0.0.1 port 51430 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:51.638504 sshd[6534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:51.644404 systemd-logind[1450]: New session 22 of user core.
Mar 4 01:04:51.649609 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 4 01:04:52.140925 sshd[6534]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:52.154035 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:51430.service: Deactivated successfully.
Mar 4 01:04:52.160234 systemd[1]: session-22.scope: Deactivated successfully.
Mar 4 01:04:52.164243 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit.
Mar 4 01:04:52.175329 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:51444.service - OpenSSH per-connection server daemon (10.0.0.1:51444).
Mar 4 01:04:52.179314 systemd-logind[1450]: Removed session 22.
Mar 4 01:04:52.243337 sshd[6548]: Accepted publickey for core from 10.0.0.1 port 51444 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:52.246253 sshd[6548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:52.253500 systemd-logind[1450]: New session 23 of user core.
Mar 4 01:04:52.265690 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 4 01:04:52.439876 sshd[6548]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:52.445292 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:51444.service: Deactivated successfully.
Mar 4 01:04:52.448801 systemd[1]: session-23.scope: Deactivated successfully.
Mar 4 01:04:52.453740 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit.
Mar 4 01:04:52.457262 systemd-logind[1450]: Removed session 23.
Mar 4 01:04:57.467142 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:51460.service - OpenSSH per-connection server daemon (10.0.0.1:51460).
Mar 4 01:04:57.511273 sshd[6570]: Accepted publickey for core from 10.0.0.1 port 51460 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:04:57.514409 sshd[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:04:57.522223 systemd-logind[1450]: New session 24 of user core.
Mar 4 01:04:57.535655 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 4 01:04:57.714833 sshd[6570]: pam_unix(sshd:session): session closed for user core
Mar 4 01:04:57.723500 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:51460.service: Deactivated successfully.
Mar 4 01:04:57.726439 systemd[1]: session-24.scope: Deactivated successfully.
Mar 4 01:04:57.728617 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit.
Mar 4 01:04:57.730326 systemd-logind[1450]: Removed session 24.
Mar 4 01:04:58.622021 kubelet[2579]: E0304 01:04:58.621822 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:04:59.659990 kubelet[2579]: E0304 01:04:59.659590 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:05:02.748078 systemd[1]: Started sshd@24-10.0.0.50:22-10.0.0.1:51242.service - OpenSSH per-connection server daemon (10.0.0.1:51242).
Mar 4 01:05:02.866992 sshd[6586]: Accepted publickey for core from 10.0.0.1 port 51242 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:05:02.870690 sshd[6586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:02.887797 systemd-logind[1450]: New session 25 of user core.
Mar 4 01:05:02.897798 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 4 01:05:03.346576 sshd[6586]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:03.353757 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit.
Mar 4 01:05:03.357292 systemd[1]: sshd@24-10.0.0.50:22-10.0.0.1:51242.service: Deactivated successfully.
Mar 4 01:05:03.364706 systemd[1]: session-25.scope: Deactivated successfully.
Mar 4 01:05:03.383101 systemd-logind[1450]: Removed session 25.
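Each `sshd@N-10.0.0.50:22-10.0.0.1:PORT.service` unit in the log above is a systemd per-connection instance: a socket unit listens on port 22 with `Accept=yes`, and systemd spawns one templated service per accepted connection, which is why every session gets its own numbered service that is deactivated when the session closes. A minimal sketch of that pattern (unit contents are illustrative, not copied from this image):

```
# sshd.socket -- hypothetical per-connection listener (sketch)
[Unit]
Description=OpenSSH per-connection server socket

[Socket]
ListenStream=22
Accept=yes            # spawn one sshd@<instance>.service per connection

[Install]
WantedBy=sockets.target

# sshd@.service -- hypothetical template instantiated per connection (sketch)
[Unit]
Description=OpenSSH per-connection server daemon

[Service]
ExecStart=-/usr/sbin/sshd -i   # -i: inetd mode, handle one connection on the socket
StandardInput=socket
```

With `Accept=yes` each instance name encodes the local and remote endpoints, matching the `sshd@12-10.0.0.50:22-10.0.0.1:50744.service` naming seen here.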
Mar 4 01:05:06.622495 kubelet[2579]: E0304 01:05:06.622300 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:05:13.786003 systemd[1]: Started sshd@25-10.0.0.50:22-10.0.0.1:59330.service - OpenSSH per-connection server daemon (10.0.0.1:59330).
Mar 4 01:05:16.757644 kubelet[2579]: E0304 01:05:16.748281 2579 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:05:17.803655 sshd[6631]: Accepted publickey for core from 10.0.0.1 port 59330 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:05:17.829648 sshd[6631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:17.834592 kubelet[2579]: E0304 01:05:17.834319 2579 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.934s"
Mar 4 01:05:18.141338 systemd-logind[1450]: New session 26 of user core.
Mar 4 01:05:18.148028 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 4 01:05:19.279220 sshd[6631]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:19.285340 systemd[1]: sshd@25-10.0.0.50:22-10.0.0.1:59330.service: Deactivated successfully.
Mar 4 01:05:19.293472 systemd[1]: session-26.scope: Deactivated successfully.
Mar 4 01:05:19.296896 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit.
Mar 4 01:05:19.299881 systemd-logind[1450]: Removed session 26.
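The recurring kubelet "Nameserver limits exceeded" errors reflect the Linux resolver's limit of three `nameserver` entries: kubelet drops any surplus servers from the pod's resolv.conf and logs the truncated list it actually applies (here `1.1.1.1 1.0.0.1 8.8.8.8`). A minimal sketch of that truncation behavior, assuming a hypothetical resolv.conf with four nameservers (the function and sample contents are illustrative, not kubelet's actual code):

```python
# Sketch of the nameserver truncation that triggers kubelet's warning.
# MAX_NAMESERVERS mirrors the classic glibc resolver limit of 3; the
# sample resolv.conf text below is hypothetical, not read from this host.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    """Return the nameservers that would actually be applied."""
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]
    # Servers past the limit are omitted -- the condition the log reports.
    return servers[:MAX_NAMESERVERS]

sample = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
print(applied_nameservers(sample))  # → ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```

The message is a warning rather than a failure: resolution continues through the first three servers, which is why the same line simply repeats throughout the log.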