Mar 12 01:34:49.163439 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026
Mar 12 01:34:49.163470 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:34:49.163565 kernel: BIOS-provided physical RAM map:
Mar 12 01:34:49.163572 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 12 01:34:49.163577 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 12 01:34:49.163583 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 12 01:34:49.163589 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 12 01:34:49.163596 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 12 01:34:49.163601 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 12 01:34:49.163609 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 12 01:34:49.163615 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 12 01:34:49.163621 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 12 01:34:49.163626 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 12 01:34:49.163632 kernel: NX (Execute Disable) protection: active
Mar 12 01:34:49.163638 kernel: APIC: Static calls initialized
Mar 12 01:34:49.163646 kernel: SMBIOS 2.8 present.
Mar 12 01:34:49.163652 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 12 01:34:49.163658 kernel: Hypervisor detected: KVM
Mar 12 01:34:49.163664 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 12 01:34:49.163670 kernel: kvm-clock: using sched offset of 10287898701 cycles
Mar 12 01:34:49.163676 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 01:34:49.163682 kernel: tsc: Detected 2445.426 MHz processor
Mar 12 01:34:49.163689 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 12 01:34:49.163696 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 12 01:34:49.163704 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 12 01:34:49.163710 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 12 01:34:49.163716 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 12 01:34:49.163722 kernel: Using GB pages for direct mapping
Mar 12 01:34:49.163728 kernel: ACPI: Early table checksum verification disabled
Mar 12 01:34:49.163737 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 12 01:34:49.163743 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:34:49.163749 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:34:49.163755 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:34:49.163764 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 12 01:34:49.163770 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:34:49.163776 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:34:49.163782 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:34:49.163788 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:34:49.163794 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 12 01:34:49.163800 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 12 01:34:49.163809 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 12 01:34:49.163818 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 12 01:34:49.163824 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 12 01:34:49.163831 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 12 01:34:49.163837 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 12 01:34:49.163843 kernel: No NUMA configuration found
Mar 12 01:34:49.163849 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 12 01:34:49.163858 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 12 01:34:49.163864 kernel: Zone ranges:
Mar 12 01:34:49.163870 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 12 01:34:49.163877 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 12 01:34:49.163883 kernel: Normal empty
Mar 12 01:34:49.163889 kernel: Movable zone start for each node
Mar 12 01:34:49.163895 kernel: Early memory node ranges
Mar 12 01:34:49.163901 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 12 01:34:49.163907 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 12 01:34:49.163913 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 12 01:34:49.163923 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 12 01:34:49.163929 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 12 01:34:49.163935 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 12 01:34:49.163941 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 12 01:34:49.163947 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 12 01:34:49.163953 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 12 01:34:49.163960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 12 01:34:49.163966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 12 01:34:49.163972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 12 01:34:49.163981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 12 01:34:49.163987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 12 01:34:49.163993 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 12 01:34:49.163999 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 12 01:34:49.164005 kernel: TSC deadline timer available
Mar 12 01:34:49.164012 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 12 01:34:49.164018 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 12 01:34:49.164024 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 12 01:34:49.164030 kernel: kvm-guest: setup PV sched yield
Mar 12 01:34:49.164039 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 12 01:34:49.164045 kernel: Booting paravirtualized kernel on KVM
Mar 12 01:34:49.164051 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 12 01:34:49.164058 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 12 01:34:49.164064 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 12 01:34:49.164070 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 12 01:34:49.164076 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 12 01:34:49.164082 kernel: kvm-guest: PV spinlocks enabled
Mar 12 01:34:49.164088 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 12 01:34:49.164098 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:34:49.164104 kernel: random: crng init done
Mar 12 01:34:49.164110 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 01:34:49.164116 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 01:34:49.164123 kernel: Fallback order for Node 0: 0
Mar 12 01:34:49.164129 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 12 01:34:49.164135 kernel: Policy zone: DMA32
Mar 12 01:34:49.164141 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 01:34:49.164150 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 12 01:34:49.164156 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 12 01:34:49.164162 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 12 01:34:49.164169 kernel: ftrace: allocated 149 pages with 4 groups
Mar 12 01:34:49.164175 kernel: Dynamic Preempt: voluntary
Mar 12 01:34:49.164181 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 01:34:49.164188 kernel: rcu: RCU event tracing is enabled.
Mar 12 01:34:49.164225 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 12 01:34:49.164232 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 01:34:49.164241 kernel: Rude variant of Tasks RCU enabled.
Mar 12 01:34:49.164248 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 01:34:49.164254 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 01:34:49.164260 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 12 01:34:49.164266 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 12 01:34:49.164273 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 01:34:49.164279 kernel: Console: colour VGA+ 80x25
Mar 12 01:34:49.164285 kernel: printk: console [ttyS0] enabled
Mar 12 01:34:49.164291 kernel: ACPI: Core revision 20230628
Mar 12 01:34:49.164300 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 12 01:34:49.164306 kernel: APIC: Switch to symmetric I/O mode setup
Mar 12 01:34:49.164312 kernel: x2apic enabled
Mar 12 01:34:49.164318 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 12 01:34:49.164325 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 12 01:34:49.164331 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 12 01:34:49.164337 kernel: kvm-guest: setup PV IPIs
Mar 12 01:34:49.164344 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 12 01:34:49.164380 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 12 01:34:49.164387 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 12 01:34:49.164394 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 12 01:34:49.164400 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 12 01:34:49.164409 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 12 01:34:49.164416 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 12 01:34:49.164422 kernel: Spectre V2 : Mitigation: Retpolines
Mar 12 01:34:49.164429 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 12 01:34:49.164436 kernel: Speculative Store Bypass: Vulnerable
Mar 12 01:34:49.164445 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 12 01:34:49.164452 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 12 01:34:49.164530 kernel: active return thunk: srso_alias_return_thunk
Mar 12 01:34:49.164545 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 12 01:34:49.164557 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 12 01:34:49.164569 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 12 01:34:49.164633 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 12 01:34:49.164642 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 12 01:34:49.164653 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 12 01:34:49.164660 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 12 01:34:49.164667 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 12 01:34:49.164673 kernel: Freeing SMP alternatives memory: 32K
Mar 12 01:34:49.164680 kernel: pid_max: default: 32768 minimum: 301
Mar 12 01:34:49.164687 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 12 01:34:49.164693 kernel: landlock: Up and running.
Mar 12 01:34:49.164700 kernel: SELinux: Initializing.
Mar 12 01:34:49.164706 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:34:49.164754 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:34:49.164762 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 12 01:34:49.164768 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:34:49.164775 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:34:49.164782 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:34:49.164788 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 12 01:34:49.164795 kernel: signal: max sigframe size: 1776
Mar 12 01:34:49.164801 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 01:34:49.164808 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 01:34:49.164818 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 12 01:34:49.164825 kernel: smp: Bringing up secondary CPUs ...
Mar 12 01:34:49.164831 kernel: smpboot: x86: Booting SMP configuration:
Mar 12 01:34:49.164838 kernel: .... node #0, CPUs: #1 #2 #3
Mar 12 01:34:49.164844 kernel: smp: Brought up 1 node, 4 CPUs
Mar 12 01:34:49.164851 kernel: smpboot: Max logical packages: 1
Mar 12 01:34:49.164857 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 12 01:34:49.164864 kernel: devtmpfs: initialized
Mar 12 01:34:49.164870 kernel: x86/mm: Memory block size: 128MB
Mar 12 01:34:49.164879 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 01:34:49.164886 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 12 01:34:49.164893 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 01:34:49.164900 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 01:34:49.164912 kernel: audit: initializing netlink subsys (disabled)
Mar 12 01:34:49.164925 kernel: audit: type=2000 audit(1773279286.862:1): state=initialized audit_enabled=0 res=1
Mar 12 01:34:49.164937 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 01:34:49.164949 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 12 01:34:49.164961 kernel: cpuidle: using governor menu
Mar 12 01:34:49.164973 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 01:34:49.164980 kernel: dca service started, version 1.12.1
Mar 12 01:34:49.164986 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 12 01:34:49.164995 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 12 01:34:49.165007 kernel: PCI: Using configuration type 1 for base access
Mar 12 01:34:49.165019 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 12 01:34:49.165030 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 01:34:49.165040 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 01:34:49.165046 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 01:34:49.165056 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 01:34:49.165067 kernel: ACPI: Added _OSI(Module Device)
Mar 12 01:34:49.165079 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 01:34:49.165090 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 01:34:49.165103 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 01:34:49.165116 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 12 01:34:49.165129 kernel: ACPI: Interpreter enabled
Mar 12 01:34:49.165141 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 12 01:34:49.165152 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 12 01:34:49.165170 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 12 01:34:49.165182 kernel: PCI: Using E820 reservations for host bridge windows
Mar 12 01:34:49.165233 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 12 01:34:49.165248 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 12 01:34:49.165671 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 01:34:49.165854 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 12 01:34:49.166057 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 12 01:34:49.166084 kernel: PCI host bridge to bus 0000:00
Mar 12 01:34:49.166307 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 12 01:34:49.166469 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 12 01:34:49.166680 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 12 01:34:49.166828 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 12 01:34:49.166959 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 12 01:34:49.167073 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 12 01:34:49.167246 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 12 01:34:49.167425 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 12 01:34:49.167731 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 12 01:34:49.167886 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 12 01:34:49.168051 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 12 01:34:49.168262 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 12 01:34:49.168403 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 12 01:34:49.168637 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 12 01:34:49.168763 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 12 01:34:49.168895 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 12 01:34:49.169061 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 12 01:34:49.169276 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 12 01:34:49.169427 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 12 01:34:49.169687 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 12 01:34:49.169854 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 12 01:34:49.170019 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 12 01:34:49.170143 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 12 01:34:49.170320 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 12 01:34:49.170565 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 12 01:34:49.170713 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 12 01:34:49.170871 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 12 01:34:49.171063 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 12 01:34:49.171283 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 12 01:34:49.171435 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 12 01:34:49.171672 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 12 01:34:49.171806 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 12 01:34:49.171927 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 12 01:34:49.171941 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 12 01:34:49.171948 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 12 01:34:49.171955 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 12 01:34:49.171961 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 12 01:34:49.171968 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 12 01:34:49.171975 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 12 01:34:49.171981 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 12 01:34:49.171988 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 12 01:34:49.171994 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 12 01:34:49.172004 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 12 01:34:49.172010 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 12 01:34:49.172017 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 12 01:34:49.172023 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 12 01:34:49.172030 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 12 01:34:49.172036 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 12 01:34:49.172043 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 12 01:34:49.172049 kernel: iommu: Default domain type: Translated
Mar 12 01:34:49.172056 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 12 01:34:49.172065 kernel: PCI: Using ACPI for IRQ routing
Mar 12 01:34:49.172072 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 12 01:34:49.172079 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 12 01:34:49.172085 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 12 01:34:49.172239 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 12 01:34:49.172412 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 12 01:34:49.172722 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 12 01:34:49.172735 kernel: vgaarb: loaded
Mar 12 01:34:49.172748 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 12 01:34:49.172755 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 12 01:34:49.172761 kernel: clocksource: Switched to clocksource kvm-clock
Mar 12 01:34:49.172768 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 01:34:49.172775 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 01:34:49.172782 kernel: pnp: PnP ACPI init
Mar 12 01:34:49.172918 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 12 01:34:49.172928 kernel: pnp: PnP ACPI: found 6 devices
Mar 12 01:34:49.172939 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 12 01:34:49.172946 kernel: NET: Registered PF_INET protocol family
Mar 12 01:34:49.172952 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 12 01:34:49.172959 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 12 01:34:49.172966 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 01:34:49.172972 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 01:34:49.172979 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 12 01:34:49.172985 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 12 01:34:49.172993 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:34:49.173010 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:34:49.173023 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 01:34:49.173035 kernel: NET: Registered PF_XDP protocol family
Mar 12 01:34:49.173248 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 12 01:34:49.173367 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 12 01:34:49.173513 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 12 01:34:49.173631 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 12 01:34:49.173740 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 12 01:34:49.173855 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 12 01:34:49.173864 kernel: PCI: CLS 0 bytes, default 64
Mar 12 01:34:49.173871 kernel: Initialise system trusted keyrings
Mar 12 01:34:49.173878 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 01:34:49.173890 kernel: Key type asymmetric registered
Mar 12 01:34:49.173903 kernel: Asymmetric key parser 'x509' registered
Mar 12 01:34:49.173915 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 12 01:34:49.173928 kernel: io scheduler mq-deadline registered
Mar 12 01:34:49.173940 kernel: io scheduler kyber registered
Mar 12 01:34:49.173958 kernel: io scheduler bfq registered
Mar 12 01:34:49.173970 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 12 01:34:49.173983 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 12 01:34:49.173996 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 12 01:34:49.174008 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 12 01:34:49.174021 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 01:34:49.174034 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 12 01:34:49.174047 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 12 01:34:49.174060 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 12 01:34:49.174078 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 12 01:34:49.174346 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 12 01:34:49.174360 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 12 01:34:49.174517 kernel: rtc_cmos 00:04: registered as rtc0
Mar 12 01:34:49.174640 kernel: rtc_cmos 00:04: setting system clock to 2026-03-12T01:34:48 UTC (1773279288)
Mar 12 01:34:49.174754 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 12 01:34:49.174763 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 12 01:34:49.174770 kernel: NET: Registered PF_INET6 protocol family
Mar 12 01:34:49.174781 kernel: Segment Routing with IPv6
Mar 12 01:34:49.174789 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 01:34:49.174795 kernel: NET: Registered PF_PACKET protocol family
Mar 12 01:34:49.174802 kernel: Key type dns_resolver registered
Mar 12 01:34:49.174809 kernel: IPI shorthand broadcast: enabled
Mar 12 01:34:49.174815 kernel: sched_clock: Marking stable (1532027798, 604647822)->(2477458672, -340783052)
Mar 12 01:34:49.174822 kernel: registered taskstats version 1
Mar 12 01:34:49.174828 kernel: Loading compiled-in X.509 certificates
Mar 12 01:34:49.174835 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510'
Mar 12 01:34:49.174844 kernel: Key type .fscrypt registered
Mar 12 01:34:49.174851 kernel: Key type fscrypt-provisioning registered
Mar 12 01:34:49.174857 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 12 01:34:49.174864 kernel: ima: Allocated hash algorithm: sha1
Mar 12 01:34:49.174870 kernel: ima: No architecture policies found
Mar 12 01:34:49.174877 kernel: clk: Disabling unused clocks
Mar 12 01:34:49.174883 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 12 01:34:49.174890 kernel: Write protecting the kernel read-only data: 36864k
Mar 12 01:34:49.174897 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 12 01:34:49.174906 kernel: Run /init as init process
Mar 12 01:34:49.174912 kernel: with arguments:
Mar 12 01:34:49.174919 kernel: /init
Mar 12 01:34:49.174925 kernel: with environment:
Mar 12 01:34:49.174932 kernel: HOME=/
Mar 12 01:34:49.174938 kernel: TERM=linux
Mar 12 01:34:49.174947 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 01:34:49.174956 systemd[1]: Detected virtualization kvm.
Mar 12 01:34:49.174966 systemd[1]: Detected architecture x86-64.
Mar 12 01:34:49.174972 systemd[1]: Running in initrd.
Mar 12 01:34:49.174979 systemd[1]: No hostname configured, using default hostname.
Mar 12 01:34:49.174986 systemd[1]: Hostname set to .
Mar 12 01:34:49.174993 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 01:34:49.175000 systemd[1]: Queued start job for default target initrd.target.
Mar 12 01:34:49.175007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:34:49.175014 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:34:49.175024 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 01:34:49.175034 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 01:34:49.175048 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 01:34:49.175062 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 01:34:49.175077 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 01:34:49.175091 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 01:34:49.175105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:34:49.175123 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:34:49.175137 systemd[1]: Reached target paths.target - Path Units.
Mar 12 01:34:49.175151 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 01:34:49.175165 systemd[1]: Reached target swap.target - Swaps.
Mar 12 01:34:49.175227 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 01:34:49.175245 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 01:34:49.175263 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 01:34:49.175276 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 01:34:49.175288 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 12 01:34:49.175296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:34:49.175310 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:34:49.175323 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:34:49.175330 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 01:34:49.175338 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 01:34:49.175345 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 01:34:49.175355 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 01:34:49.175362 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 01:34:49.175369 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 01:34:49.175377 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 01:34:49.175384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:34:49.175414 systemd-journald[195]: Collecting audit messages is disabled.
Mar 12 01:34:49.175435 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 01:34:49.175443 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:34:49.175450 systemd-journald[195]: Journal started
Mar 12 01:34:49.175469 systemd-journald[195]: Runtime Journal (/run/log/journal/44df8121c96c4e55a568f4d153c999cc) is 6.0M, max 48.4M, 42.3M free.
Mar 12 01:34:49.180676 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 01:34:49.184741 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 01:34:49.191258 systemd-modules-load[196]: Inserted module 'overlay'
Mar 12 01:34:49.191709 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 01:34:49.210860 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 01:34:49.369612 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 01:34:49.369648 kernel: Bridge firewalling registered
Mar 12 01:34:49.240437 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 12 01:34:49.372347 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 01:34:49.380108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:34:49.387718 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 01:34:49.397934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 01:34:49.426792 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:34:49.428369 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 01:34:49.433725 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 01:34:49.444403 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:34:49.449538 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 01:34:49.462051 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 01:34:49.469756 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 01:34:49.478354 dracut-cmdline[228]: dracut-dracut-053
Mar 12 01:34:49.478354 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:34:49.481729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 01:34:49.551771 systemd-resolved[244]: Positive Trust Anchors:
Mar 12 01:34:49.551816 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 01:34:49.551861 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 01:34:49.555907 systemd-resolved[244]: Defaulting to hostname 'linux'.
Mar 12 01:34:49.557709 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 01:34:49.564378 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:34:49.609032 kernel: SCSI subsystem initialized
Mar 12 01:34:49.620592 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 01:34:49.635584 kernel: iscsi: registered transport (tcp)
Mar 12 01:34:49.658103 kernel: iscsi: registered transport (qla4xxx)
Mar 12 01:34:49.658246 kernel: QLogic iSCSI HBA Driver
Mar 12 01:34:49.723580 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 12 01:34:49.743901 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 12 01:34:49.776576 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 01:34:49.776652 kernel: device-mapper: uevent: version 1.0.3
Mar 12 01:34:49.779580 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 12 01:34:49.830600 kernel: raid6: avx2x4 gen() 21036 MB/s
Mar 12 01:34:49.848568 kernel: raid6: avx2x2 gen() 19470 MB/s
Mar 12 01:34:49.868434 kernel: raid6: avx2x1 gen() 11273 MB/s
Mar 12 01:34:49.868543 kernel: raid6: using algorithm avx2x4 gen() 21036 MB/s
Mar 12 01:34:49.888371 kernel: raid6: .... xor() 4857 MB/s, rmw enabled
Mar 12 01:34:49.888429 kernel: raid6: using avx2x2 recovery algorithm
Mar 12 01:34:49.916538 kernel: xor: automatically using best checksumming function avx
Mar 12 01:34:50.099599 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 12 01:34:50.123388 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 01:34:50.140804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 01:34:50.155770 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 12 01:34:50.162738 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 01:34:50.166648 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 12 01:34:50.193993 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Mar 12 01:34:50.240579 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 01:34:50.255047 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 01:34:50.347674 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 01:34:50.361816 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 12 01:34:50.385870 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 12 01:34:50.393683 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 01:34:50.416112 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 12 01:34:50.400341 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:34:50.410655 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 01:34:50.430571 kernel: cryptd: max_cpu_qlen set to 1000
Mar 12 01:34:50.432734 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 12 01:34:50.438166 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 12 01:34:50.452809 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 12 01:34:50.452863 kernel: GPT:9289727 != 19775487
Mar 12 01:34:50.452878 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 12 01:34:50.452900 kernel: GPT:9289727 != 19775487
Mar 12 01:34:50.454147 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 12 01:34:50.454515 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 01:34:50.462470 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:34:50.454714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:34:50.472049 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:34:50.476314 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 01:34:50.476723 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:34:50.481150 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:34:50.491164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:34:50.496870 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 01:34:50.519522 kernel: libata version 3.00 loaded.
Mar 12 01:34:50.528847 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 12 01:34:50.528897 kernel: ahci 0000:00:1f.2: version 3.0
Mar 12 01:34:50.529104 kernel: AES CTR mode by8 optimization enabled
Mar 12 01:34:50.529115 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 12 01:34:50.534570 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 12 01:34:50.534826 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 12 01:34:50.538526 kernel: scsi host0: ahci
Mar 12 01:34:50.538916 kernel: scsi host1: ahci
Mar 12 01:34:50.542538 kernel: scsi host2: ahci
Mar 12 01:34:50.542805 kernel: scsi host3: ahci
Mar 12 01:34:50.543541 kernel: scsi host4: ahci
Mar 12 01:34:50.547775 kernel: scsi host5: ahci
Mar 12 01:34:50.548036 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 12 01:34:50.548064 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 12 01:34:50.548082 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 12 01:34:50.548099 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 12 01:34:50.548116 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 12 01:34:50.548129 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 12 01:34:50.574533 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (478)
Mar 12 01:34:50.574603 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (485)
Mar 12 01:34:50.592942 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 12 01:34:50.727550 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:34:50.739523 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 12 01:34:50.743827 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 12 01:34:50.753623 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 12 01:34:50.766530 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 12 01:34:50.786791 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 12 01:34:50.796191 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:34:50.813132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:34:50.813165 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:34:50.813184 disk-uuid[557]: Primary Header is updated.
Mar 12 01:34:50.813184 disk-uuid[557]: Secondary Entries is updated.
Mar 12 01:34:50.813184 disk-uuid[557]: Secondary Header is updated.
Mar 12 01:34:50.822374 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:34:50.825066 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:34:50.861547 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 12 01:34:50.872070 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 12 01:34:50.872126 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 12 01:34:50.872144 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 12 01:34:50.878385 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 12 01:34:50.878423 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 12 01:34:50.878442 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 12 01:34:50.884405 kernel: ata3.00: applying bridge limits
Mar 12 01:34:50.887153 kernel: ata3.00: configured for UDMA/100
Mar 12 01:34:50.894579 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 12 01:34:50.961750 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 12 01:34:50.962052 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 12 01:34:50.987585 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 12 01:34:51.820594 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:34:51.820994 disk-uuid[559]: The operation has completed successfully.
Mar 12 01:34:51.861398 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 12 01:34:51.861618 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 12 01:34:51.896781 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 12 01:34:51.904465 sh[598]: Success
Mar 12 01:34:51.922656 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 12 01:34:51.971578 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 12 01:34:51.991916 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 12 01:34:52.001667 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 12 01:34:52.016425 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb
Mar 12 01:34:52.016531 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:34:52.016553 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 12 01:34:52.019292 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 12 01:34:52.021279 kernel: BTRFS info (device dm-0): using free space tree
Mar 12 01:34:52.031576 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 12 01:34:52.032469 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 01:34:52.045821 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 12 01:34:52.052706 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 01:34:52.068281 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:34:52.068326 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:34:52.068345 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:34:52.075530 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:34:52.089075 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 12 01:34:52.094435 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:34:52.106570 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 12 01:34:52.122845 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 12 01:34:52.227057 ignition[688]: Ignition 2.19.0
Mar 12 01:34:52.227088 ignition[688]: Stage: fetch-offline
Mar 12 01:34:52.227137 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:34:52.227154 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:34:52.227314 ignition[688]: parsed url from cmdline: ""
Mar 12 01:34:52.227320 ignition[688]: no config URL provided
Mar 12 01:34:52.227328 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 01:34:52.227340 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Mar 12 01:34:52.227386 ignition[688]: op(1): [started] loading QEMU firmware config module
Mar 12 01:34:52.227393 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 12 01:34:52.236431 ignition[688]: op(1): [finished] loading QEMU firmware config module
Mar 12 01:34:52.236463 ignition[688]: QEMU firmware config was not found. Ignoring...
Mar 12 01:34:52.335042 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 01:34:52.350737 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 01:34:52.387788 systemd-networkd[786]: lo: Link UP
Mar 12 01:34:52.387822 systemd-networkd[786]: lo: Gained carrier
Mar 12 01:34:52.390299 systemd-networkd[786]: Enumeration completed
Mar 12 01:34:52.391157 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 01:34:52.391389 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:34:52.391395 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 01:34:52.393062 systemd-networkd[786]: eth0: Link UP
Mar 12 01:34:52.393067 systemd-networkd[786]: eth0: Gained carrier
Mar 12 01:34:52.393079 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:34:52.411982 systemd[1]: Reached target network.target - Network.
Mar 12 01:34:52.439918 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 12 01:34:52.476756 ignition[688]: parsing config with SHA512: 764945e498cde33d2fccc9926a3dbc22b9c5617f1c93f53274565dc9cbcfa769be9c8c98ae803d59f62dc4f8c42613b48dfad6bde480241ab9f0f620733b6099
Mar 12 01:34:52.484635 unknown[688]: fetched base config from "system"
Mar 12 01:34:52.484659 unknown[688]: fetched user config from "qemu"
Mar 12 01:34:52.489253 ignition[688]: fetch-offline: fetch-offline passed
Mar 12 01:34:52.489379 ignition[688]: Ignition finished successfully
Mar 12 01:34:52.500830 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 01:34:52.509157 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 12 01:34:52.519309 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 01:34:52.550717 ignition[790]: Ignition 2.19.0
Mar 12 01:34:52.550740 ignition[790]: Stage: kargs
Mar 12 01:34:52.550907 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:34:52.550918 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:34:52.551910 ignition[790]: kargs: kargs passed
Mar 12 01:34:52.551959 ignition[790]: Ignition finished successfully
Mar 12 01:34:52.566013 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 01:34:52.585814 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 01:34:52.607137 ignition[797]: Ignition 2.19.0
Mar 12 01:34:52.607170 ignition[797]: Stage: disks
Mar 12 01:34:52.607532 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:34:52.611127 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 01:34:52.607552 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:34:52.615176 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 01:34:52.608755 ignition[797]: disks: disks passed
Mar 12 01:34:52.621403 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 01:34:52.608818 ignition[797]: Ignition finished successfully
Mar 12 01:34:52.627243 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 01:34:52.630787 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 01:34:52.637920 systemd[1]: Reached target basic.target - Basic System.
Mar 12 01:34:52.653879 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 01:34:52.674976 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 12 01:34:52.680902 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 01:34:52.682357 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 01:34:52.822552 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none.
Mar 12 01:34:52.822748 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 01:34:52.825945 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 01:34:52.843708 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 01:34:52.848360 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 01:34:52.865253 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Mar 12 01:34:52.865284 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:34:52.865301 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:34:52.854803 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 12 01:34:52.886629 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:34:52.886662 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:34:52.854871 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 01:34:52.854907 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 01:34:52.879359 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 01:34:52.886743 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 01:34:52.910750 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 01:34:52.965039 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 01:34:52.973448 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Mar 12 01:34:52.983094 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 01:34:52.992253 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 01:34:53.145717 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 01:34:53.162826 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 01:34:53.171755 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 01:34:53.177399 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 01:34:53.187469 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:34:53.207751 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 01:34:53.221572 ignition[929]: INFO : Ignition 2.19.0
Mar 12 01:34:53.221572 ignition[929]: INFO : Stage: mount
Mar 12 01:34:53.226999 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 01:34:53.226999 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:34:53.226999 ignition[929]: INFO : mount: mount passed
Mar 12 01:34:53.226999 ignition[929]: INFO : Ignition finished successfully
Mar 12 01:34:53.241045 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 12 01:34:53.259699 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 12 01:34:53.271366 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 01:34:53.286537 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Mar 12 01:34:53.290589 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:34:53.290614 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:34:53.294522 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:34:53.300568 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:34:53.302123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 01:34:53.335582 ignition[959]: INFO : Ignition 2.19.0 Mar 12 01:34:53.335582 ignition[959]: INFO : Stage: files Mar 12 01:34:53.341026 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:34:53.341026 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:34:53.341026 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Mar 12 01:34:53.341026 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 12 01:34:53.341026 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 12 01:34:53.362992 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 12 01:34:53.362992 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 12 01:34:53.362992 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 12 01:34:53.362992 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:34:53.362992 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 12 01:34:53.343849 unknown[959]: wrote ssh authorized keys file for user: core Mar 12 01:34:53.395534 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 12 01:34:53.477749 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:34:53.477749 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 12 01:34:53.488615 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 12 01:34:53.545811 systemd-networkd[786]: eth0: Gained IPv6LL Mar 12 01:34:53.705361 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 12 01:34:54.237588 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 12 01:34:54.237588 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 12 01:34:54.249751 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:34:54.249751 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:34:54.249751 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 12 01:34:54.249751 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 12 01:34:54.249751 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:34:54.249751 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:34:54.249751 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 12 01:34:54.249751 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 12 01:34:54.308844 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:34:54.308844 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:34:54.308844 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 12 01:34:54.308844 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 12 01:34:54.308844 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 12 01:34:54.308844 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:34:54.308844 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:34:54.308844 ignition[959]: INFO : files: files passed Mar 12 01:34:54.308844 ignition[959]: INFO : Ignition finished successfully Mar 12 01:34:54.277596 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 12 01:34:54.308871 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 12 01:34:54.318298 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Mar 12 01:34:54.389689 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Mar 12 01:34:54.326118 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 12 01:34:54.398108 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:34:54.398108 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:34:54.326322 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 12 01:34:54.410592 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:34:54.340718 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:34:54.346238 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 01:34:54.374865 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 01:34:54.446868 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 01:34:54.449826 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 01:34:54.458010 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 01:34:54.463634 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 01:34:54.469718 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 01:34:54.480724 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 12 01:34:54.498064 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:34:54.513761 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 12 01:34:54.525096 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:34:54.525392 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:34:54.533126 systemd[1]: Stopped target timers.target - Timer Units. Mar 12 01:34:54.540222 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 12 01:34:54.540380 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:34:54.552119 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 12 01:34:54.555837 systemd[1]: Stopped target basic.target - Basic System. Mar 12 01:34:54.562105 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 12 01:34:54.567040 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:34:54.573380 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 01:34:54.587857 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 01:34:54.588174 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:34:54.598757 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 01:34:54.599003 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 01:34:54.606585 systemd[1]: Stopped target swap.target - Swaps. Mar 12 01:34:54.615075 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 01:34:54.615333 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:34:54.629881 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 12 01:34:54.637706 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:34:54.645292 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 12 01:34:54.645453 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:34:54.653673 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 01:34:54.653871 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 12 01:34:54.661143 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 01:34:54.661373 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:34:54.671018 systemd[1]: Stopped target paths.target - Path Units. Mar 12 01:34:54.674256 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 01:34:54.679291 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:34:54.681091 systemd[1]: Stopped target slices.target - Slice Units. Mar 12 01:34:54.689645 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 01:34:54.699866 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 01:34:54.700022 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 01:34:54.705986 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 01:34:54.706134 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 01:34:54.708955 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 12 01:34:54.709131 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:34:54.716039 systemd[1]: ignition-files.service: Deactivated successfully. Mar 12 01:34:54.716237 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 12 01:34:54.731758 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 12 01:34:54.733350 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 12 01:34:54.733613 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:34:54.746856 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 01:34:54.756857 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 01:34:54.788692 ignition[1013]: INFO : Ignition 2.19.0 Mar 12 01:34:54.788692 ignition[1013]: INFO : Stage: umount Mar 12 01:34:54.788692 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:34:54.788692 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:34:54.788692 ignition[1013]: INFO : umount: umount passed Mar 12 01:34:54.788692 ignition[1013]: INFO : Ignition finished successfully Mar 12 01:34:54.757082 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:34:54.763424 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 01:34:54.763629 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 01:34:54.780310 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 01:34:54.782710 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 01:34:54.782885 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 12 01:34:54.789056 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 01:34:54.789255 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
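
Editor's aside: the umount stage above is the last Ignition run in the initrd; everything after it is teardown before the root switch. For anyone mining these logs, the entries share a fixed shape (timestamp, process[pid]: message) that a short parser can split apart. The regular expression below is an assumption fitted to the lines in this capture, not an official journal format:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches lines like:
//   Mar 12 01:34:54.788692 ignition[1013]: INFO : Stage: umount
// capturing timestamp, process name, PID, and the message body.
var lineRE = regexp.MustCompile(`^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) ([\w.-]+)\[(\d+)\]: (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := lineRE.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s  %s(pid %s): %s\n", m[1], m[2], m[3], m[4])
		}
	}
}
```
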
Mar 12 01:34:54.797574 systemd[1]: Stopped target network.target - Network. Mar 12 01:34:54.801932 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 12 01:34:54.802057 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 01:34:54.809237 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 01:34:54.809311 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 01:34:54.816916 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 12 01:34:54.817000 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 12 01:34:54.823179 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 12 01:34:54.823298 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 12 01:34:54.830382 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 12 01:34:54.838243 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 12 01:34:54.848889 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 12 01:34:54.849147 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 12 01:34:54.849565 systemd-networkd[786]: eth0: DHCPv6 lease lost Mar 12 01:34:54.857599 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 12 01:34:54.857798 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 12 01:34:54.863986 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 12 01:34:54.864178 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 12 01:34:54.871084 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 12 01:34:54.871173 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:34:54.875119 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 12 01:34:54.875231 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 12 01:34:54.889709 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 12 01:34:54.894719 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 12 01:34:54.894816 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 01:34:54.900774 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 01:34:54.900880 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:34:54.906418 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 12 01:34:54.906647 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 12 01:34:54.914038 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 12 01:34:54.914150 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:34:54.920660 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:34:54.943537 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 12 01:34:54.943779 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 12 01:34:54.951442 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 12 01:34:54.951805 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:34:54.959892 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 12 01:34:54.959985 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Mar 12 01:34:54.965868 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 12 01:34:54.965939 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:34:54.973934 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 12 01:34:54.974034 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 12 01:34:54.981649 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 12 01:34:54.981716 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 12 01:34:54.988454 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 01:34:54.988606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:34:55.014926 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 12 01:34:55.021736 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 12 01:34:55.021851 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:34:55.029434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:34:55.029589 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:34:55.038317 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 12 01:34:55.038545 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 12 01:34:55.045268 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 12 01:34:55.053648 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 12 01:34:55.068722 systemd[1]: Switching root. Mar 12 01:34:55.112060 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Mar 12 01:34:55.167045 systemd-journald[195]: Journal stopped Mar 12 01:34:56.531048 kernel: SELinux: policy capability network_peer_controls=1 Mar 12 01:34:56.531131 kernel: SELinux: policy capability open_perms=1 Mar 12 01:34:56.531151 kernel: SELinux: policy capability extended_socket_class=1 Mar 12 01:34:56.531166 kernel: SELinux: policy capability always_check_network=0 Mar 12 01:34:56.531188 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 12 01:34:56.531243 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 12 01:34:56.531260 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 12 01:34:56.531283 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 12 01:34:56.531301 kernel: audit: type=1403 audit(1773279295.289:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 12 01:34:56.531324 systemd[1]: Successfully loaded SELinux policy in 50.746ms. Mar 12 01:34:56.531354 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.433ms. Mar 12 01:34:56.531371 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 01:34:56.531387 systemd[1]: Detected virtualization kvm. Mar 12 01:34:56.531406 systemd[1]: Detected architecture x86-64. Mar 12 01:34:56.531422 systemd[1]: Detected first boot. Mar 12 01:34:56.531438 systemd[1]: Initializing machine ID from VM UUID. Mar 12 01:34:56.531453 zram_generator::config[1058]: No configuration found. Mar 12 01:34:56.531471 systemd[1]: Populated /etc with preset unit settings. 
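
Editor's aside: the "Switching root" line above marks the hand-off from the initramfs to the real root filesystem, after which journald is restarted and the new systemd loads SELinux policy. Conceptually, and glossing over the bookkeeping the real initrd-switch-root does (recursively moving the API mounts, freeing initramfs memory), the pivot is an MS_MOVE mount plus a chroot. A sketch under those assumptions, not Flatcar's actual implementation:

```go
package main

import "golang.org/x/sys/unix"

// Conceptual switch-root: move the prepared /sysroot over /, enter it,
// and exec the real init. Error handling is deliberately blunt here.
func main() {
	if err := unix.Chdir("/sysroot"); err != nil {
		panic(err)
	}
	if err := unix.Mount("/sysroot", "/", "", unix.MS_MOVE, ""); err != nil {
		panic(err)
	}
	if err := unix.Chroot("."); err != nil {
		panic(err)
	}
	_ = unix.Chdir("/")
	// PID 1 would now exec the real systemd from the new root.
	_ = unix.Exec("/usr/lib/systemd/systemd", []string{"systemd"}, nil)
}
```
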
Mar 12 01:34:56.531531 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 12 01:34:56.531549 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 12 01:34:56.531565 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 12 01:34:56.531592 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 12 01:34:56.531609 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 12 01:34:56.531625 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 12 01:34:56.531647 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 12 01:34:56.531663 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 12 01:34:56.531679 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 12 01:34:56.531695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 12 01:34:56.531711 systemd[1]: Created slice user.slice - User and Session Slice. Mar 12 01:34:56.531728 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:34:56.531747 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:34:56.531764 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 12 01:34:56.531779 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 12 01:34:56.531795 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 12 01:34:56.531811 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 01:34:56.531827 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 12 01:34:56.531843 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:34:56.531859 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 12 01:34:56.531874 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 12 01:34:56.531894 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 12 01:34:56.531910 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 12 01:34:56.531927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:34:56.531943 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:34:56.531959 systemd[1]: Reached target slices.target - Slice Units. Mar 12 01:34:56.531975 systemd[1]: Reached target swap.target - Swaps. Mar 12 01:34:56.531992 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 12 01:34:56.532009 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 12 01:34:56.532032 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:34:56.532054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 01:34:56.532074 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:34:56.532098 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 12 01:34:56.532118 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
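
Editor's aside: a reading note on the slice names above. Sequences like system-serial\x2dgetty.slice are systemd's unit-name escaping, where a literal "-" inside a component becomes \x2d because "-" doubles as the hierarchy separator in slice names. A toy encoder in the spirit of systemd-escape, simplified and not byte-for-byte identical to systemd's real rules:

```go
package main

import (
	"fmt"
	"strings"
)

// Toy unit-name escaping: '/' becomes '-', and anything outside
// [a-zA-Z0-9_.] is hex-escaped as \xNN. The real rules also
// special-case leading dots and empty strings.
func escape(s string) string {
	var b strings.Builder
	for _, r := range s {
		switch {
		case r == '/':
			b.WriteByte('-')
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z',
			r >= '0' && r <= '9', r == '_', r == '.':
			b.WriteRune(r)
		default:
			fmt.Fprintf(&b, `\x%02x`, r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escape("serial-getty")) // serial\x2dgetty
}
```
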
Mar 12 01:34:56.532137 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 12 01:34:56.532158 systemd[1]: Mounting media.mount - External Media Directory... Mar 12 01:34:56.532177 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:34:56.532198 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 12 01:34:56.532264 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 12 01:34:56.532286 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 12 01:34:56.532305 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 12 01:34:56.532328 systemd[1]: Reached target machines.target - Containers. Mar 12 01:34:56.532347 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 12 01:34:56.532367 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:34:56.532387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 01:34:56.532407 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 12 01:34:56.532434 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:34:56.532455 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:34:56.532547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:34:56.532580 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 12 01:34:56.532601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:34:56.532622 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 12 01:34:56.532643 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 12 01:34:56.532666 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 12 01:34:56.532693 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 12 01:34:56.532714 systemd[1]: Stopped systemd-fsck-usr.service. Mar 12 01:34:56.532735 kernel: fuse: init (API version 7.39) Mar 12 01:34:56.532752 kernel: loop: module loaded Mar 12 01:34:56.532771 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 01:34:56.532790 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 01:34:56.532811 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 12 01:34:56.532830 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 12 01:34:56.532850 kernel: ACPI: bus type drm_connector registered Mar 12 01:34:56.532876 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 01:34:56.532926 systemd-journald[1139]: Collecting audit messages is disabled. Mar 12 01:34:56.532964 systemd[1]: verity-setup.service: Deactivated successfully. Mar 12 01:34:56.532986 systemd-journald[1139]: Journal started Mar 12 01:34:56.533016 systemd-journald[1139]: Runtime Journal (/run/log/journal/44df8121c96c4e55a568f4d153c999cc) is 6.0M, max 48.4M, 42.3M free. 
Mar 12 01:34:56.021705 systemd[1]: Queued start job for default target multi-user.target. Mar 12 01:34:56.054293 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 12 01:34:56.055036 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 12 01:34:56.055439 systemd[1]: systemd-journald.service: Consumed 1.524s CPU time. Mar 12 01:34:56.537608 systemd[1]: Stopped verity-setup.service. Mar 12 01:34:56.545548 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:34:56.553155 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 01:34:56.554249 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 12 01:34:56.557243 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 12 01:34:56.560120 systemd[1]: Mounted media.mount - External Media Directory. Mar 12 01:34:56.563457 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 12 01:34:56.566818 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 12 01:34:56.570537 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 12 01:34:56.573813 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 12 01:34:56.577371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:34:56.581098 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 12 01:34:56.581425 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 12 01:34:56.585464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:34:56.586051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:34:56.590784 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:34:56.590998 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:34:56.595109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:34:56.595454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:34:56.600958 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 12 01:34:56.601263 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 12 01:34:56.605926 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:34:56.606247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:34:56.610934 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:34:56.616275 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 01:34:56.621643 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 12 01:34:56.642040 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 01:34:56.652745 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 12 01:34:56.657845 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 12 01:34:56.660702 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 12 01:34:56.660765 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:34:56.664550 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Mar 12 01:34:56.669742 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 12 01:34:56.674872 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 12 01:34:56.678346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:34:56.680466 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 12 01:34:56.684570 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 12 01:34:56.688697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:34:56.690688 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 12 01:34:56.694287 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:34:56.696826 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:34:56.705630 systemd-journald[1139]: Time spent on flushing to /var/log/journal/44df8121c96c4e55a568f4d153c999cc is 38.614ms for 941 entries. Mar 12 01:34:56.705630 systemd-journald[1139]: System Journal (/var/log/journal/44df8121c96c4e55a568f4d153c999cc) is 8.0M, max 195.6M, 187.6M free. Mar 12 01:34:56.757732 systemd-journald[1139]: Received client request to flush runtime journal. Mar 12 01:34:56.757781 kernel: loop0: detected capacity change from 0 to 142488 Mar 12 01:34:56.704622 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 12 01:34:56.716454 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 12 01:34:56.723022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:34:56.728599 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 12 01:34:56.732831 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 12 01:34:56.740707 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 12 01:34:56.745608 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 12 01:34:56.755199 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 12 01:34:56.766783 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 12 01:34:56.772686 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 12 01:34:56.779074 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 12 01:34:56.786532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:34:56.800367 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 12 01:34:56.804727 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 12 01:34:56.818780 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 12 01:34:56.821163 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 12 01:34:56.836078 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Mar 12 01:34:56.836544 kernel: loop1: detected capacity change from 0 to 219192 Mar 12 01:34:56.847778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 01:34:56.884660 kernel: loop2: detected capacity change from 0 to 140768 Mar 12 01:34:56.892033 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Mar 12 01:34:56.892060 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Mar 12 01:34:56.901708 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:34:56.927545 kernel: loop3: detected capacity change from 0 to 142488 Mar 12 01:34:56.948530 kernel: loop4: detected capacity change from 0 to 219192 Mar 12 01:34:56.963548 kernel: loop5: detected capacity change from 0 to 140768 Mar 12 01:34:56.980158 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 12 01:34:56.981072 (sd-merge)[1196]: Merged extensions into '/usr'. Mar 12 01:34:56.989809 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 01:34:56.989850 systemd[1]: Reloading... Mar 12 01:34:57.072561 zram_generator::config[1220]: No configuration found. Mar 12 01:34:57.136536 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 12 01:34:57.227690 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:34:57.288910 systemd[1]: Reloading finished in 298 ms. Mar 12 01:34:57.339857 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 12 01:34:57.343932 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 12 01:34:57.348599 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 12 01:34:57.374805 systemd[1]: Starting ensure-sysext.service... Mar 12 01:34:57.378627 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:34:57.383713 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:34:57.389850 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Mar 12 01:34:57.389893 systemd[1]: Reloading... Mar 12 01:34:57.407138 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 01:34:57.407771 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 12 01:34:57.409310 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 12 01:34:57.409793 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 12 01:34:57.409961 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 12 01:34:57.415295 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:34:57.415328 systemd-tmpfiles[1261]: Skipping /boot Mar 12 01:34:57.433973 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:34:57.433994 systemd-tmpfiles[1261]: Skipping /boot Mar 12 01:34:57.441683 systemd-udevd[1262]: Using default interface naming scheme 'v255'. Mar 12 01:34:57.456650 zram_generator::config[1288]: No configuration found. 
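
Editor's aside: the (sd-merge) lines above show systemd-sysext activating the containerd-flatcar, docker-flatcar, and kubernetes extension images that Ignition dropped under /etc/extensions and /opt/extensions, followed by a daemon reload to pick up the merged units. Mechanically the merge is an overlayfs stacked on /usr; the sketch below is an assumption about the shape of that mount (the lowerdir paths are invented for illustration; systemd manages the real hierarchy under /run):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Hypothetical sysext-style merge: a read-only overlay on /usr
	// whose lower layers are the extension images plus the original
	// /usr as the bottom layer.
	opts := "lowerdir=/run/ext/kubernetes/usr:/run/ext/docker/usr:/run/ext/containerd/usr:/usr"
	if err := unix.Mount("overlay", "/usr", "overlay", unix.MS_RDONLY, opts); err != nil {
		log.Fatal(err)
	}
}
```
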
Mar 12 01:34:57.538577 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1306) Mar 12 01:34:57.615596 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 12 01:34:57.621564 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 12 01:34:57.634196 kernel: ACPI: button: Power Button [PWRF] Mar 12 01:34:57.620748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:34:57.648568 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 12 01:34:57.656123 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 12 01:34:57.662542 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 12 01:34:57.697160 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 01:34:57.738926 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 12 01:34:57.739172 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:34:57.743649 systemd[1]: Reloading finished in 353 ms. Mar 12 01:34:57.832271 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:34:57.834591 kernel: kvm_amd: TSC scaling supported Mar 12 01:34:57.834635 kernel: kvm_amd: Nested Virtualization enabled Mar 12 01:34:57.834649 kernel: kvm_amd: Nested Paging enabled Mar 12 01:34:57.834661 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 12 01:34:57.839007 kernel: kvm_amd: PMU virtualization is disabled Mar 12 01:34:57.882592 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:34:57.896567 kernel: EDAC MC: Ver: 3.0.0 Mar 12 01:34:57.914313 systemd[1]: Finished ensure-sysext.service. Mar 12 01:34:57.932407 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 12 01:34:57.948638 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:34:57.965063 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:34:57.971397 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 12 01:34:57.976119 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:34:57.980715 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 12 01:34:57.995816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:34:58.003353 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:34:58.009039 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:34:58.009866 lvm[1368]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 01:34:58.014745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:34:58.021044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:34:58.024684 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
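
Editor's aside: the kvm_amd lines above show the guest itself exposing AMD virtualization features (nested virtualization, nested paging, TSC scaling) while PMU virtualization stays off. The nested-virtualization fact can also be read back at runtime from the module parameter; the sysfs path is a real one on AMD hosts, the snippet itself is just an aside:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// "1" (or "Y") here corresponds to the "Nested Virtualization
	// enabled" message in the boot log above.
	data, err := os.ReadFile("/sys/module/kvm_amd/parameters/nested")
	if err != nil {
		fmt.Println("kvm_amd not loaded:", err)
		return
	}
	fmt.Println("nested:", strings.TrimSpace(string(data)))
}
```
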
Mar 12 01:34:58.033685 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 12 01:34:58.036768 augenrules[1383]: No rules Mar 12 01:34:58.039950 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 01:34:58.048001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 01:34:58.056757 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 12 01:34:58.063424 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 12 01:34:58.070884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:34:58.074867 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:34:58.076686 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:34:58.081144 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 12 01:34:58.086646 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:34:58.086843 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:34:58.092176 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:34:58.092570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:34:58.097819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:34:58.098085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:34:58.102935 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:34:58.103259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:34:58.107960 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 12 01:34:58.113188 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 12 01:34:58.119346 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 12 01:34:58.139561 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:34:58.157899 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 12 01:34:58.161743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:34:58.161833 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:34:58.163631 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 12 01:34:58.164451 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 01:34:58.170834 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 12 01:34:58.314616 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 12 01:34:58.320040 systemd-networkd[1389]: lo: Link UP Mar 12 01:34:58.320051 systemd-networkd[1389]: lo: Gained carrier Mar 12 01:34:58.322117 systemd-networkd[1389]: Enumeration completed Mar 12 01:34:58.323998 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Mar 12 01:34:58.324683 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:34:58.324716 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 01:34:58.326315 systemd-networkd[1389]: eth0: Link UP Mar 12 01:34:58.326351 systemd-networkd[1389]: eth0: Gained carrier Mar 12 01:34:58.326371 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:34:58.328154 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 01:34:58.332069 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 12 01:34:58.336844 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:34:58.338318 systemd-resolved[1390]: Positive Trust Anchors: Mar 12 01:34:58.338756 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 01:34:58.338811 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 01:34:58.341333 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 12 01:34:58.345927 systemd-resolved[1390]: Defaulting to hostname 'linux'. Mar 12 01:34:58.346163 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 12 01:34:58.350308 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 01:34:58.354237 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 12 01:34:58.354642 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:34:58.355939 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Mar 12 01:34:59.010987 systemd-resolved[1390]: Clock change detected. Flushing caches. Mar 12 01:34:59.011089 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 12 01:34:59.011147 systemd-timesyncd[1391]: Initial clock synchronization to Thu 2026-03-12 01:34:59.010899 UTC. Mar 12 01:34:59.017388 systemd[1]: Reached target network.target - Network. Mar 12 01:34:59.021243 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:34:59.026225 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:34:59.030634 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 12 01:34:59.035739 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 12 01:34:59.040946 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 12 01:34:59.046022 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
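
Editor's aside: note the apparent time jump inside the entries above. systemd-timesyncd steps the clock ("Initial clock synchronization ..."), systemd-resolved reacts ("Clock change detected. Flushing caches."), and the journal timestamps leap from 01:34:58.355939 to 01:34:59.010987. Parsing the two stamps shows the step was roughly 655 ms:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches the "Mon D HH:MM:SS.micro" stamps in this log.
	const layout = "Jan 2 15:04:05.000000"
	before, _ := time.Parse(layout, "Mar 12 01:34:58.355939")
	after, _ := time.Parse(layout, "Mar 12 01:34:59.010987")
	fmt.Println(after.Sub(before)) // ≈655.048ms
}
```
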
Mar 12 01:34:59.046182 systemd[1]: Reached target paths.target - Path Units. Mar 12 01:34:59.050356 systemd[1]: Reached target time-set.target - System Time Set. Mar 12 01:34:59.055078 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 12 01:34:59.059702 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 12 01:34:59.065118 systemd[1]: Reached target timers.target - Timer Units. Mar 12 01:34:59.069203 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 12 01:34:59.074644 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 12 01:34:59.083241 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 12 01:34:59.088832 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 12 01:34:59.094043 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 12 01:34:59.097831 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 01:34:59.100566 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:34:59.103819 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:34:59.103888 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:34:59.105740 systemd[1]: Starting containerd.service - containerd container runtime... Mar 12 01:34:59.111745 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 12 01:34:59.117497 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 12 01:34:59.124099 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 12 01:34:59.128435 jq[1427]: false Mar 12 01:34:59.129257 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 12 01:34:59.131346 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 12 01:34:59.138246 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 12 01:34:59.147840 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 12 01:34:59.152603 extend-filesystems[1428]: Found loop3 Mar 12 01:34:59.154974 extend-filesystems[1428]: Found loop4 Mar 12 01:34:59.154974 extend-filesystems[1428]: Found loop5 Mar 12 01:34:59.154974 extend-filesystems[1428]: Found sr0 Mar 12 01:34:59.154974 extend-filesystems[1428]: Found vda Mar 12 01:34:59.154974 extend-filesystems[1428]: Found vda1 Mar 12 01:34:59.154974 extend-filesystems[1428]: Found vda2 Mar 12 01:34:59.154974 extend-filesystems[1428]: Found vda3 Mar 12 01:34:59.154974 extend-filesystems[1428]: Found usr Mar 12 01:34:59.154974 extend-filesystems[1428]: Found vda4 Mar 12 01:34:59.154974 extend-filesystems[1428]: Found vda6 Mar 12 01:34:59.154974 extend-filesystems[1428]: Found vda7 Mar 12 01:34:59.154974 extend-filesystems[1428]: Found vda9 Mar 12 01:34:59.154974 extend-filesystems[1428]: Checking size of /dev/vda9 Mar 12 01:34:59.157321 dbus-daemon[1426]: [system] SELinux support is enabled Mar 12 01:34:59.196795 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1330) Mar 12 01:34:59.163505 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 12 01:34:59.168708 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 12 01:34:59.175361 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 12 01:34:59.175896 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 12 01:34:59.187258 systemd[1]: Starting update-engine.service - Update Engine... Mar 12 01:34:59.207373 extend-filesystems[1428]: Resized partition /dev/vda9 Mar 12 01:34:59.212707 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) Mar 12 01:34:59.230929 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 12 01:34:59.216543 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 12 01:34:59.224322 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 12 01:34:59.240917 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 12 01:34:59.241843 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 12 01:34:59.242583 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 01:34:59.242867 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 12 01:34:59.249052 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 12 01:34:59.249182 jq[1446]: true Mar 12 01:34:59.249399 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 12 01:34:59.264230 update_engine[1442]: I20260312 01:34:59.264106 1442 main.cc:92] Flatcar Update Engine starting Mar 12 01:34:59.269432 update_engine[1442]: I20260312 01:34:59.269310 1442 update_check_scheduler.cc:74] Next update check in 5m8s Mar 12 01:34:59.269498 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Mar 12 01:34:59.269532 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 12 01:34:59.270144 systemd-logind[1441]: New seat seat0. Mar 12 01:34:59.283151 jq[1453]: true Mar 12 01:34:59.276371 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 12 01:34:59.276644 systemd[1]: Started systemd-logind.service - User Login Management. Mar 12 01:34:59.292561 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 12 01:34:59.295135 tar[1452]: linux-amd64/LICENSE Mar 12 01:34:59.295420 tar[1452]: linux-amd64/helm Mar 12 01:34:59.300339 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 12 01:34:59.306580 systemd[1]: Started update-engine.service - Update Engine. Mar 12 01:34:59.312560 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 01:34:59.332610 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 12 01:34:59.332610 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 12 01:34:59.332610 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 12 01:34:59.312789 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
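
Editor's aside: the EXT4 resize messages above are counted in 4 KiB blocks, so extend-filesystems grew the root filesystem on /dev/vda9 from roughly 2.1 GiB to 7.1 GiB when it expanded it to the full partition size. The arithmetic, using the block counts from the log:

```go
package main

import "fmt"

func main() {
	// Block counts from the log; the EXT4 block size here is 4 KiB.
	const blockSize = 4096.0
	fmt.Printf("before: %.2f GiB\n", 553472*blockSize/(1<<30))  // ~2.11 GiB
	fmt.Printf("after:  %.2f GiB\n", 1864699*blockSize/(1<<30)) // ~7.11 GiB
}
```
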
Mar 12 01:34:59.366765 extend-filesystems[1428]: Resized filesystem in /dev/vda9 Mar 12 01:34:59.317721 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 01:34:59.318152 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 12 01:34:59.332038 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 12 01:34:59.345913 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 12 01:34:59.346232 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 01:34:59.376663 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 01:34:59.392827 bash[1480]: Updated "/home/core/.ssh/authorized_keys" Mar 12 01:34:59.395125 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 12 01:34:59.402909 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 12 01:34:59.427594 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 01:34:59.460043 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 01:34:59.472683 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 01:34:59.489810 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 01:34:59.490327 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 01:34:59.500863 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 01:34:59.523677 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 01:34:59.526197 containerd[1454]: time="2026-03-12T01:34:59.526060269Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 12 01:34:59.544746 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 01:34:59.549923 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 01:34:59.553624 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 01:34:59.559776 containerd[1454]: time="2026-03-12T01:34:59.559646864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:34:59.564555 containerd[1454]: time="2026-03-12T01:34:59.564494164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:34:59.564555 containerd[1454]: time="2026-03-12T01:34:59.564547232Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 12 01:34:59.564654 containerd[1454]: time="2026-03-12T01:34:59.564570045Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 12 01:34:59.564884 containerd[1454]: time="2026-03-12T01:34:59.564822056Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 12 01:34:59.564884 containerd[1454]: time="2026-03-12T01:34:59.564872510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Mar 12 01:34:59.565062 containerd[1454]: time="2026-03-12T01:34:59.565039612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:34:59.565095 containerd[1454]: time="2026-03-12T01:34:59.565062224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:34:59.565401 containerd[1454]: time="2026-03-12T01:34:59.565361333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:34:59.565447 containerd[1454]: time="2026-03-12T01:34:59.565398742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 12 01:34:59.565447 containerd[1454]: time="2026-03-12T01:34:59.565420002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:34:59.565447 containerd[1454]: time="2026-03-12T01:34:59.565435922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 12 01:34:59.565594 containerd[1454]: time="2026-03-12T01:34:59.565557238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:34:59.565948 containerd[1454]: time="2026-03-12T01:34:59.565891632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:34:59.566907 containerd[1454]: time="2026-03-12T01:34:59.566125749Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:34:59.566907 containerd[1454]: time="2026-03-12T01:34:59.566170944Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 12 01:34:59.566907 containerd[1454]: time="2026-03-12T01:34:59.566384573Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 12 01:34:59.566907 containerd[1454]: time="2026-03-12T01:34:59.566473829Z" level=info msg="metadata content store policy set" policy=shared Mar 12 01:34:59.575212 containerd[1454]: time="2026-03-12T01:34:59.575109667Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 12 01:34:59.575212 containerd[1454]: time="2026-03-12T01:34:59.575194595Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 12 01:34:59.575212 containerd[1454]: time="2026-03-12T01:34:59.575212629Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 12 01:34:59.575212 containerd[1454]: time="2026-03-12T01:34:59.575227336Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 12 01:34:59.575463 containerd[1454]: time="2026-03-12T01:34:59.575240371Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Mar 12 01:34:59.575547 containerd[1454]: time="2026-03-12T01:34:59.575487642Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 12 01:34:59.576023 containerd[1454]: time="2026-03-12T01:34:59.575911303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 12 01:34:59.576129 containerd[1454]: time="2026-03-12T01:34:59.576091199Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 12 01:34:59.576129 containerd[1454]: time="2026-03-12T01:34:59.576108291Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 12 01:34:59.576129 containerd[1454]: time="2026-03-12T01:34:59.576121516Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 12 01:34:59.576224 containerd[1454]: time="2026-03-12T01:34:59.576142886Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 12 01:34:59.576224 containerd[1454]: time="2026-03-12T01:34:59.576164265Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 12 01:34:59.576224 containerd[1454]: time="2026-03-12T01:34:59.576209360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 12 01:34:59.576392 containerd[1454]: time="2026-03-12T01:34:59.576231671Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 12 01:34:59.576392 containerd[1454]: time="2026-03-12T01:34:59.576252530Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 12 01:34:59.576392 containerd[1454]: time="2026-03-12T01:34:59.576341056Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 12 01:34:59.576392 containerd[1454]: time="2026-03-12T01:34:59.576363938Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 12 01:34:59.576392 containerd[1454]: time="2026-03-12T01:34:59.576383525Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 12 01:34:59.576513 containerd[1454]: time="2026-03-12T01:34:59.576409553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576513 containerd[1454]: time="2026-03-12T01:34:59.576432847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576513 containerd[1454]: time="2026-03-12T01:34:59.576450901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576513 containerd[1454]: time="2026-03-12T01:34:59.576470918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576513 containerd[1454]: time="2026-03-12T01:34:59.576490104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576511253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576531391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576553111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576581635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576603456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576624655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576649351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576670089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576695067Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576727517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576744508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576764746Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 12 01:34:59.576843 containerd[1454]: time="2026-03-12T01:34:59.576836119Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 12 01:34:59.577205 containerd[1454]: time="2026-03-12T01:34:59.576861717Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 12 01:34:59.577205 containerd[1454]: time="2026-03-12T01:34:59.576880302Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 12 01:34:59.577205 containerd[1454]: time="2026-03-12T01:34:59.576899107Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 12 01:34:59.577205 containerd[1454]: time="2026-03-12T01:34:59.576913404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 12 01:34:59.577205 containerd[1454]: time="2026-03-12T01:34:59.576932871Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 12 01:34:59.577205 containerd[1454]: time="2026-03-12T01:34:59.576946876Z" level=info msg="NRI interface is disabled by configuration." Mar 12 01:34:59.577205 containerd[1454]: time="2026-03-12T01:34:59.576962796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 12 01:34:59.577441 containerd[1454]: time="2026-03-12T01:34:59.577340732Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 12 01:34:59.577441 containerd[1454]: time="2026-03-12T01:34:59.577396836Z" level=info msg="Connect containerd service" Mar 12 01:34:59.577657 containerd[1454]: time="2026-03-12T01:34:59.577470725Z" level=info msg="using legacy CRI server" Mar 12 01:34:59.577657 containerd[1454]: time="2026-03-12T01:34:59.577506050Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 01:34:59.577657 containerd[1454]: time="2026-03-12T01:34:59.577580360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 12 01:34:59.578520 containerd[1454]: time="2026-03-12T01:34:59.578440555Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 01:34:59.578915 
containerd[1454]: time="2026-03-12T01:34:59.578793020Z" level=info msg="Start subscribing containerd event" Mar 12 01:34:59.579334 containerd[1454]: time="2026-03-12T01:34:59.579158934Z" level=info msg="Start recovering state" Mar 12 01:34:59.579334 containerd[1454]: time="2026-03-12T01:34:59.578851052Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 01:34:59.579334 containerd[1454]: time="2026-03-12T01:34:59.579260353Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 01:34:59.579509 containerd[1454]: time="2026-03-12T01:34:59.579463161Z" level=info msg="Start event monitor" Mar 12 01:34:59.580144 containerd[1454]: time="2026-03-12T01:34:59.579606579Z" level=info msg="Start snapshots syncer" Mar 12 01:34:59.580144 containerd[1454]: time="2026-03-12T01:34:59.579672953Z" level=info msg="Start cni network conf syncer for default" Mar 12 01:34:59.580144 containerd[1454]: time="2026-03-12T01:34:59.579687440Z" level=info msg="Start streaming server" Mar 12 01:34:59.580144 containerd[1454]: time="2026-03-12T01:34:59.579863649Z" level=info msg="containerd successfully booted in 0.057511s" Mar 12 01:34:59.580133 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 01:34:59.857105 tar[1452]: linux-amd64/README.md Mar 12 01:34:59.876887 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 01:35:00.214639 systemd-networkd[1389]: eth0: Gained IPv6LL Mar 12 01:35:00.218331 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 01:35:00.224799 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 01:35:00.242815 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 12 01:35:00.249553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:35:00.254352 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 01:35:00.297020 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 01:35:00.301869 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 12 01:35:00.302309 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 12 01:35:00.309391 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 01:35:01.238215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:35:01.243082 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 01:35:01.247938 systemd[1]: Startup finished in 1.714s (kernel) + 6.512s (initrd) + 5.355s (userspace) = 13.582s. Mar 12 01:35:01.248904 (kubelet)[1538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:35:01.739311 kubelet[1538]: E0312 01:35:01.739158 1538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:35:01.743341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:35:01.743602 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:35:01.744246 systemd[1]: kubelet.service: Consumed 1.078s CPU time. 
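Note: one of the failures above is a benign bootstrapping gap. A few entries up, containerd's CRI plugin reported "no network config found in /etc/cni/net.d", which is expected on a node where no network plugin has been installed yet; the "cni network conf syncer" started above keeps watching that directory, and pod networking comes up once a conflist file appears. A minimal sketch of such a file, assuming a plain bridge/host-local setup (the network name, bridge name, and pod subnet below are illustrative assumptions, not values from this host):

import json, os

# Hedged sketch: write a minimal CNI conflist so the cri plugin's conf
# syncer (see "Start cni network conf syncer for default" above) can pick
# it up. Plugin types "bridge", "host-local", and "portmap" are the
# standard reference CNI plugins.
CONF_DIR = "/etc/cni/net.d"   # directory named in the error above
conflist = {
    "cniVersion": "1.0.0",
    "name": "examplenet",                         # assumed name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",                     # assumed bridge name
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/24",        # assumed pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}
os.makedirs(CONF_DIR, exist_ok=True)
with open(os.path.join(CONF_DIR, "10-examplenet.conflist"), "w") as f:
    json.dump(conflist, f, indent=2)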
Mar 12 01:35:03.791337 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 01:35:03.805687 systemd[1]: Started sshd@0-10.0.0.111:22-10.0.0.1:51612.service - OpenSSH per-connection server daemon (10.0.0.1:51612). Mar 12 01:35:03.870298 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 51612 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:35:03.874652 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:35:03.884465 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 01:35:03.894563 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 01:35:03.896836 systemd-logind[1441]: New session 1 of user core. Mar 12 01:35:03.908901 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 01:35:03.912185 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 01:35:03.924497 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 01:35:04.073194 systemd[1556]: Queued start job for default target default.target. Mar 12 01:35:04.086563 systemd[1556]: Created slice app.slice - User Application Slice. Mar 12 01:35:04.086627 systemd[1556]: Reached target paths.target - Paths. Mar 12 01:35:04.086652 systemd[1556]: Reached target timers.target - Timers. Mar 12 01:35:04.089215 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 01:35:04.108851 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 01:35:04.109115 systemd[1556]: Reached target sockets.target - Sockets. Mar 12 01:35:04.109172 systemd[1556]: Reached target basic.target - Basic System. Mar 12 01:35:04.109240 systemd[1556]: Reached target default.target - Main User Target. Mar 12 01:35:04.109360 systemd[1556]: Startup finished in 176ms. Mar 12 01:35:04.109768 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 01:35:04.112509 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 01:35:04.198755 systemd[1]: Started sshd@1-10.0.0.111:22-10.0.0.1:51624.service - OpenSSH per-connection server daemon (10.0.0.1:51624). Mar 12 01:35:04.248959 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 51624 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:35:04.251181 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:35:04.256897 systemd-logind[1441]: New session 2 of user core. Mar 12 01:35:04.266565 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 01:35:04.357203 sshd[1567]: pam_unix(sshd:session): session closed for user core Mar 12 01:35:04.374371 systemd[1]: Started sshd@2-10.0.0.111:22-10.0.0.1:51628.service - OpenSSH per-connection server daemon (10.0.0.1:51628). Mar 12 01:35:04.392110 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Mar 12 01:35:04.399952 systemd[1]: sshd@1-10.0.0.111:22-10.0.0.1:51624.service: Deactivated successfully. Mar 12 01:35:04.404196 systemd[1]: session-2.scope: Deactivated successfully. Mar 12 01:35:04.417851 systemd-logind[1441]: Removed session 2. 
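Note: the sshd entries above follow a fixed format ("Accepted publickey for USER from ADDR port PORT ssh2: TYPE FINGERPRINT"), so login auditing from this log is mechanical. A small sketch, assuming the console output has been captured one journal entry per line into a file named boot.log (the filename and capture format are assumptions):

import re

# Extract user, source address, port, key type, and fingerprint from the
# sshd acceptance lines shown above.
PAT = re.compile(
    r"Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)"
)

with open("boot.log") as f:                      # assumed log capture
    for line in f:
        m = PAT.search(line)
        if m:
            user, addr, port, ktype, fp = m.groups()
            print(f"{user}@{addr}:{port} authenticated with {ktype} key {fp}")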
Mar 12 01:35:04.448338 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 51628 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:35:04.450563 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:35:04.459231 systemd-logind[1441]: New session 3 of user core. Mar 12 01:35:04.471625 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 01:35:04.534764 sshd[1572]: pam_unix(sshd:session): session closed for user core Mar 12 01:35:04.547863 systemd[1]: sshd@2-10.0.0.111:22-10.0.0.1:51628.service: Deactivated successfully. Mar 12 01:35:04.550820 systemd[1]: session-3.scope: Deactivated successfully. Mar 12 01:35:04.553512 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Mar 12 01:35:04.564806 systemd[1]: Started sshd@3-10.0.0.111:22-10.0.0.1:51634.service - OpenSSH per-connection server daemon (10.0.0.1:51634). Mar 12 01:35:04.566350 systemd-logind[1441]: Removed session 3. Mar 12 01:35:04.605249 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 51634 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:35:04.607473 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:35:04.613206 systemd-logind[1441]: New session 4 of user core. Mar 12 01:35:04.629625 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 01:35:04.689681 sshd[1581]: pam_unix(sshd:session): session closed for user core Mar 12 01:35:04.706203 systemd[1]: sshd@3-10.0.0.111:22-10.0.0.1:51634.service: Deactivated successfully. Mar 12 01:35:04.708828 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 01:35:04.710618 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Mar 12 01:35:04.719856 systemd[1]: Started sshd@4-10.0.0.111:22-10.0.0.1:51642.service - OpenSSH per-connection server daemon (10.0.0.1:51642). Mar 12 01:35:04.721555 systemd-logind[1441]: Removed session 4. Mar 12 01:35:04.761098 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 51642 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:35:04.763425 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:35:04.770360 systemd-logind[1441]: New session 5 of user core. Mar 12 01:35:04.779662 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 01:35:04.845982 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 01:35:04.846563 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:35:04.863669 sudo[1593]: pam_unix(sudo:session): session closed for user root Mar 12 01:35:04.866054 sshd[1589]: pam_unix(sshd:session): session closed for user core Mar 12 01:35:04.882551 systemd[1]: sshd@4-10.0.0.111:22-10.0.0.1:51642.service: Deactivated successfully. Mar 12 01:35:04.884576 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 01:35:04.886224 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Mar 12 01:35:04.893664 systemd[1]: Started sshd@5-10.0.0.111:22-10.0.0.1:51644.service - OpenSSH per-connection server daemon (10.0.0.1:51644). Mar 12 01:35:04.895157 systemd-logind[1441]: Removed session 5. 
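Note: likewise, each sudo entry records the invoking user, working directory, target user, and the exact command (e.g. "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"), so the privileged actions in these sessions can be reconstructed from the log alone. A sketch in the same style as the sshd parser above, with the same assumed boot.log capture:

import re

# Pull the privileged command trail out of the sudo lines above. The
# pam_unix(sudo:session) lines lack " : PWD=" and are skipped.
SUDO = re.compile(r"sudo\[\d+\]: (\S+) : PWD=(\S+) ; USER=(\S+) ; COMMAND=(.+)")

with open("boot.log") as f:                      # assumed log capture
    for line in f:
        m = SUDO.search(line)
        if m:
            who, cwd, target, cmd = m.groups()
            print(f"{who} ran as {target} from {cwd}: {cmd}")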
Mar 12 01:35:04.928222 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 51644 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:35:04.930106 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:35:04.936202 systemd-logind[1441]: New session 6 of user core. Mar 12 01:35:04.951661 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 12 01:35:05.011963 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 01:35:05.012568 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:35:05.018126 sudo[1602]: pam_unix(sudo:session): session closed for user root Mar 12 01:35:05.027929 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 12 01:35:05.028539 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:35:05.051657 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 12 01:35:05.054249 auditctl[1605]: No rules Mar 12 01:35:05.055971 systemd[1]: audit-rules.service: Deactivated successfully. Mar 12 01:35:05.056445 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 12 01:35:05.059488 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:35:05.103522 augenrules[1623]: No rules Mar 12 01:35:05.104710 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:35:05.106335 sudo[1601]: pam_unix(sudo:session): session closed for user root Mar 12 01:35:05.108921 sshd[1598]: pam_unix(sshd:session): session closed for user core Mar 12 01:35:05.117864 systemd[1]: sshd@5-10.0.0.111:22-10.0.0.1:51644.service: Deactivated successfully. Mar 12 01:35:05.120426 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 01:35:05.122048 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Mar 12 01:35:05.134755 systemd[1]: Started sshd@6-10.0.0.111:22-10.0.0.1:51658.service - OpenSSH per-connection server daemon (10.0.0.1:51658). Mar 12 01:35:05.136158 systemd-logind[1441]: Removed session 6. Mar 12 01:35:05.168053 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 51658 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:35:05.170209 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:35:05.176396 systemd-logind[1441]: New session 7 of user core. Mar 12 01:35:05.187473 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 12 01:35:05.250047 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 01:35:05.250482 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:35:05.709577 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 12 01:35:05.709840 (dockerd)[1654]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 01:35:06.034390 dockerd[1654]: time="2026-03-12T01:35:06.033720714Z" level=info msg="Starting up" Mar 12 01:35:06.243850 dockerd[1654]: time="2026-03-12T01:35:06.243715134Z" level=info msg="Loading containers: start." 
Mar 12 01:35:06.422387 kernel: Initializing XFRM netlink socket Mar 12 01:35:06.535817 systemd-networkd[1389]: docker0: Link UP Mar 12 01:35:06.560889 dockerd[1654]: time="2026-03-12T01:35:06.560805119Z" level=info msg="Loading containers: done." Mar 12 01:35:06.578311 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck828401987-merged.mount: Deactivated successfully. Mar 12 01:35:06.581954 dockerd[1654]: time="2026-03-12T01:35:06.581871684Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 01:35:06.582113 dockerd[1654]: time="2026-03-12T01:35:06.582072299Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 12 01:35:06.582336 dockerd[1654]: time="2026-03-12T01:35:06.582230624Z" level=info msg="Daemon has completed initialization" Mar 12 01:35:06.626060 dockerd[1654]: time="2026-03-12T01:35:06.625976113Z" level=info msg="API listen on /run/docker.sock" Mar 12 01:35:06.626212 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 12 01:35:07.181765 containerd[1454]: time="2026-03-12T01:35:07.181627379Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 12 01:35:07.766366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875713032.mount: Deactivated successfully. Mar 12 01:35:08.866979 containerd[1454]: time="2026-03-12T01:35:08.866876393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:08.868167 containerd[1454]: time="2026-03-12T01:35:08.868073568Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 12 01:35:08.870048 containerd[1454]: time="2026-03-12T01:35:08.869969698Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:08.874223 containerd[1454]: time="2026-03-12T01:35:08.874138249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:08.875804 containerd[1454]: time="2026-03-12T01:35:08.875735260Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.694051375s" Mar 12 01:35:08.875804 containerd[1454]: time="2026-03-12T01:35:08.875802305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 12 01:35:08.877230 containerd[1454]: time="2026-03-12T01:35:08.876964004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 12 01:35:10.037592 containerd[1454]: time="2026-03-12T01:35:10.037476003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:10.038601 
containerd[1454]: time="2026-03-12T01:35:10.038555199Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 12 01:35:10.040382 containerd[1454]: time="2026-03-12T01:35:10.040228502Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:10.044794 containerd[1454]: time="2026-03-12T01:35:10.044628823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:10.046483 containerd[1454]: time="2026-03-12T01:35:10.046379866Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.169372963s" Mar 12 01:35:10.046483 containerd[1454]: time="2026-03-12T01:35:10.046442884Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 12 01:35:10.047454 containerd[1454]: time="2026-03-12T01:35:10.047103950Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 12 01:35:11.111176 containerd[1454]: time="2026-03-12T01:35:11.110344869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:11.112681 containerd[1454]: time="2026-03-12T01:35:11.112526297Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 12 01:35:11.115650 containerd[1454]: time="2026-03-12T01:35:11.115523988Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:11.120478 containerd[1454]: time="2026-03-12T01:35:11.120336626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:11.122652 containerd[1454]: time="2026-03-12T01:35:11.122139586Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.074985623s" Mar 12 01:35:11.122652 containerd[1454]: time="2026-03-12T01:35:11.122216730Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 12 01:35:11.123351 containerd[1454]: time="2026-03-12T01:35:11.123247167Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 12 01:35:12.829042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 12 01:35:12.866152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:35:14.419683 kernel: hrtimer: interrupt took 23109985 ns Mar 12 01:35:15.343918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:35:15.401222 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:35:15.648662 kubelet[1883]: E0312 01:35:15.647454 1883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:35:15.687705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:35:15.688232 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:35:16.231822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058506959.mount: Deactivated successfully. Mar 12 01:35:16.897894 containerd[1454]: time="2026-03-12T01:35:16.897737492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:16.898882 containerd[1454]: time="2026-03-12T01:35:16.898804621Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 12 01:35:16.904132 containerd[1454]: time="2026-03-12T01:35:16.900786895Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:16.905462 containerd[1454]: time="2026-03-12T01:35:16.905365878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:16.906749 containerd[1454]: time="2026-03-12T01:35:16.906669717Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 5.783320539s" Mar 12 01:35:16.906749 containerd[1454]: time="2026-03-12T01:35:16.906724950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 12 01:35:16.908420 containerd[1454]: time="2026-03-12T01:35:16.908208260Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 12 01:35:17.689583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150073237.mount: Deactivated successfully. 
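Note: the kubelet exit above is the same failure logged at first boot: systemd starts the unit, kubelet aborts because /var/lib/kubelet/config.yaml does not exist, and the unit is rescheduled (the restart counters interleave with the image pulls). On a kubeadm-provisioned node that file is written during kubeadm init/join, so the loop is expected to resolve itself once provisioning reaches that step. A trivial sketch of the failing precondition:

import os, sys

# The file kubelet is started against; path taken from the error above.
CONFIG = "/var/lib/kubelet/config.yaml"

if not os.path.exists(CONFIG):
    # Mirrors the failure above: the unit exits 1, systemd marks it
    # failed and schedules a restart.
    sys.exit(f"open {CONFIG}: no such file or directory")
print("config present; kubelet would proceed")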
Mar 12 01:35:20.668966 containerd[1454]: time="2026-03-12T01:35:20.668857919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:20.670604 containerd[1454]: time="2026-03-12T01:35:20.670529661Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 12 01:35:20.672227 containerd[1454]: time="2026-03-12T01:35:20.672104571Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:20.676579 containerd[1454]: time="2026-03-12T01:35:20.676498373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:20.678084 containerd[1454]: time="2026-03-12T01:35:20.678001259Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.769609808s" Mar 12 01:35:20.678153 containerd[1454]: time="2026-03-12T01:35:20.678080838Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 12 01:35:20.678876 containerd[1454]: time="2026-03-12T01:35:20.678793308Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 12 01:35:21.206230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766556055.mount: Deactivated successfully. 
Mar 12 01:35:21.213614 containerd[1454]: time="2026-03-12T01:35:21.213535652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:21.214571 containerd[1454]: time="2026-03-12T01:35:21.214488436Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 12 01:35:21.215986 containerd[1454]: time="2026-03-12T01:35:21.215915400Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:21.219124 containerd[1454]: time="2026-03-12T01:35:21.219005539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:21.220530 containerd[1454]: time="2026-03-12T01:35:21.220438940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 541.605679ms" Mar 12 01:35:21.220530 containerd[1454]: time="2026-03-12T01:35:21.220486920Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 12 01:35:21.221208 containerd[1454]: time="2026-03-12T01:35:21.221130439Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 12 01:35:22.120613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4137999076.mount: Deactivated successfully. Mar 12 01:35:24.919334 containerd[1454]: time="2026-03-12T01:35:24.919074316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:24.920609 containerd[1454]: time="2026-03-12T01:35:24.920360447Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 12 01:35:24.922608 containerd[1454]: time="2026-03-12T01:35:24.922542651Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:24.927466 containerd[1454]: time="2026-03-12T01:35:24.927236063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:24.929257 containerd[1454]: time="2026-03-12T01:35:24.929123998Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 3.707906978s" Mar 12 01:35:24.929257 containerd[1454]: time="2026-03-12T01:35:24.929194550Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 12 01:35:25.938593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
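Note: the restart counters give the crash-loop period directly: counter 1 was scheduled at 01:35:12.829 and counter 2 at 01:35:25.938, roughly 13 s apart (kubelet runs for a few seconds, fails, then waits out the unit's restart delay). A sketch that measures this from the log, using the same assumed one-entry-per-line boot.log capture as above:

import re
from datetime import datetime

# Measure the interval between "Scheduled restart job" entries for
# kubelet.service in the systemd lines above.
PAT = re.compile(r"(\w+ \d+ [\d:.]+) systemd\[1\]: kubelet\.service: "
                 r"Scheduled restart job, restart counter is at (\d+)")

stamps = []
with open("boot.log") as f:                      # assumed log capture
    for line in f:
        m = PAT.search(line)
        if m:
            # The timestamp has no year; any fixed year works for deltas.
            stamps.append((datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f"),
                           int(m.group(2))))

for (t0, n0), (t1, n1) in zip(stamps, stamps[1:]):
    print(f"restart {n0} -> {n1}: {(t1 - t0).total_seconds():.1f}s apart")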
Mar 12 01:35:25.947648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:35:26.432786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:35:26.440422 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:35:26.582124 kubelet[2047]: E0312 01:35:26.581948 2047 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:35:26.587878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:35:26.588346 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:35:29.408058 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:35:29.426836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:35:29.487162 systemd[1]: Reloading requested from client PID 2062 ('systemctl') (unit session-7.scope)... Mar 12 01:35:29.487203 systemd[1]: Reloading... Mar 12 01:35:29.652321 zram_generator::config[2102]: No configuration found. Mar 12 01:35:29.812882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:35:29.924608 systemd[1]: Reloading finished in 432 ms. Mar 12 01:35:29.996186 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 12 01:35:29.996411 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 12 01:35:29.996829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:35:30.012015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:35:30.216010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:35:30.225160 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:35:30.352927 kubelet[2150]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 01:35:30.352927 kubelet[2150]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:35:30.353518 kubelet[2150]: I0312 01:35:30.352951 2150 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 01:35:30.976591 kubelet[2150]: I0312 01:35:30.976480 2150 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 12 01:35:30.976591 kubelet[2150]: I0312 01:35:30.976546 2150 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:35:30.982139 kubelet[2150]: I0312 01:35:30.979868 2150 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 01:35:30.982139 kubelet[2150]: I0312 01:35:30.979932 2150 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
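Note: when kubelet finally stays up (PID 2150, below) it warns that --pod-infra-container-image and --volume-plugin-dir are deprecated flags. The second has a direct KubeletConfiguration equivalent; a hedged sketch of the fragment that replaces it (field name per the kubelet.config.k8s.io/v1beta1 schema; the directory value mirrors the flexvolume path kubelet logs a few entries below):

# Sketch: config-file replacement for the deprecated --volume-plugin-dir
# flag warned about above. Per the other warning, --pod-infra-container-image
# has no config-file equivalent; the sandbox image now comes from the CRI.
fragment = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""
print(fragment)   # to be merged into the kubelet config file by provisioning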
Mar 12 01:35:30.982139 kubelet[2150]: I0312 01:35:30.980377 2150 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 01:35:31.173667 kubelet[2150]: E0312 01:35:31.173534 2150 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 01:35:31.175836 kubelet[2150]: I0312 01:35:31.175761 2150 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:35:31.187064 kubelet[2150]: E0312 01:35:31.186876 2150 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:35:31.187064 kubelet[2150]: I0312 01:35:31.187057 2150 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 12 01:35:31.198732 kubelet[2150]: I0312 01:35:31.198626 2150 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 12 01:35:31.201308 kubelet[2150]: I0312 01:35:31.201173 2150 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:35:31.201437 kubelet[2150]: I0312 01:35:31.201226 2150 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:35:31.201540 kubelet[2150]: I0312 01:35:31.201443 2150 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 01:35:31.201540 kubelet[2150]: I0312 01:35:31.201455 2150 container_manager_linux.go:306] "Creating device plugin manager" Mar 12 01:35:31.201616 kubelet[2150]: I0312 01:35:31.201592 2150 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) 
manager" Mar 12 01:35:31.205398 kubelet[2150]: I0312 01:35:31.205325 2150 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:35:31.205880 kubelet[2150]: I0312 01:35:31.205826 2150 kubelet.go:475] "Attempting to sync node with API server" Mar 12 01:35:31.205952 kubelet[2150]: I0312 01:35:31.205909 2150 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:35:31.205952 kubelet[2150]: I0312 01:35:31.205943 2150 kubelet.go:387] "Adding apiserver pod source" Mar 12 01:35:31.206599 kubelet[2150]: I0312 01:35:31.205963 2150 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:35:31.207003 kubelet[2150]: E0312 01:35:31.206933 2150 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 01:35:31.207003 kubelet[2150]: E0312 01:35:31.206934 2150 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 01:35:31.208807 kubelet[2150]: I0312 01:35:31.208744 2150 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:35:31.209240 kubelet[2150]: I0312 01:35:31.209187 2150 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:35:31.209240 kubelet[2150]: I0312 01:35:31.209235 2150 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 01:35:31.209391 kubelet[2150]: W0312 01:35:31.209356 2150 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 12 01:35:31.214944 kubelet[2150]: I0312 01:35:31.214888 2150 server.go:1262] "Started kubelet" Mar 12 01:35:31.217307 kubelet[2150]: I0312 01:35:31.215683 2150 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:35:31.217307 kubelet[2150]: I0312 01:35:31.215745 2150 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 01:35:31.217307 kubelet[2150]: I0312 01:35:31.216223 2150 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:35:31.217307 kubelet[2150]: I0312 01:35:31.216364 2150 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:35:31.217307 kubelet[2150]: I0312 01:35:31.216533 2150 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 01:35:31.226900 kubelet[2150]: E0312 01:35:31.224884 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189bf428109a5cd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 01:35:31.214843088 +0000 UTC m=+0.973076543,LastTimestamp:2026-03-12 01:35:31.214843088 +0000 UTC m=+0.973076543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 01:35:31.227484 kubelet[2150]: I0312 01:35:31.227459 2150 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:35:31.231314 kubelet[2150]: I0312 01:35:31.231174 2150 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 12 01:35:31.231538 kubelet[2150]: E0312 01:35:31.231494 2150 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:35:31.231842 kubelet[2150]: I0312 01:35:31.231825 2150 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 01:35:31.231949 kubelet[2150]: I0312 01:35:31.231904 2150 reconciler.go:29] "Reconciler: start to sync state" Mar 12 01:35:31.232101 kubelet[2150]: E0312 01:35:31.232060 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="200ms" Mar 12 01:35:31.234365 kubelet[2150]: I0312 01:35:31.233998 2150 server.go:310] "Adding debug handlers to kubelet server" Mar 12 01:35:31.234365 kubelet[2150]: E0312 01:35:31.234107 2150 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 01:35:31.235127 kubelet[2150]: I0312 01:35:31.234679 2150 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such 
file or directory Mar 12 01:35:31.236184 kubelet[2150]: E0312 01:35:31.236149 2150 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:35:31.236698 kubelet[2150]: I0312 01:35:31.236677 2150 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:35:31.236698 kubelet[2150]: I0312 01:35:31.236697 2150 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:35:31.268345 kubelet[2150]: I0312 01:35:31.268222 2150 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 01:35:31.268345 kubelet[2150]: I0312 01:35:31.268260 2150 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 01:35:31.268345 kubelet[2150]: I0312 01:35:31.268314 2150 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:35:31.278532 kubelet[2150]: I0312 01:35:31.276740 2150 policy_none.go:49] "None policy: Start" Mar 12 01:35:31.278532 kubelet[2150]: I0312 01:35:31.277062 2150 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 12 01:35:31.278532 kubelet[2150]: I0312 01:35:31.277644 2150 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 01:35:31.278532 kubelet[2150]: I0312 01:35:31.277739 2150 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 01:35:31.282500 kubelet[2150]: I0312 01:35:31.282416 2150 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 01:35:31.282594 kubelet[2150]: I0312 01:35:31.282547 2150 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 12 01:35:31.282594 kubelet[2150]: I0312 01:35:31.282586 2150 kubelet.go:2428] "Starting kubelet main sync loop" Mar 12 01:35:31.282661 kubelet[2150]: E0312 01:35:31.282647 2150 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:35:31.283972 kubelet[2150]: E0312 01:35:31.283704 2150 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 01:35:31.287883 kubelet[2150]: I0312 01:35:31.287776 2150 policy_none.go:47] "Start" Mar 12 01:35:31.304751 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 01:35:31.328757 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 12 01:35:31.332253 kubelet[2150]: E0312 01:35:31.331964 2150 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:35:31.338433 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
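Note: every "connection refused" above is the same symptom: nothing is listening on 10.0.0.111:6443 yet, because the API server kubelet is dialing runs as one of the static pods this very kubelet is about to start. That is a normal bootstrapping window on a control-plane node. A quick probe to tell this apart from a routing or firewall problem:

import socket

# Probe the endpoint the reflectors above keep dialing. "Connection
# refused" means the host answered but nothing listens on the port;
# a timeout would point at routing or firewalling instead.
try:
    with socket.create_connection(("10.0.0.111", 6443), timeout=2):
        print("apiserver port is accepting connections")
except ConnectionRefusedError:
    print("refused: host up, apiserver not listening yet")
except OSError as e:
    print(f"other failure (routing/firewall?): {e}")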
Mar 12 01:35:31.358074 kubelet[2150]: E0312 01:35:31.357762 2150 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:35:31.358738 kubelet[2150]: I0312 01:35:31.358089 2150 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 01:35:31.358738 kubelet[2150]: I0312 01:35:31.358105 2150 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:35:31.358738 kubelet[2150]: I0312 01:35:31.358486 2150 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 01:35:31.360921 kubelet[2150]: E0312 01:35:31.360801 2150 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 01:35:31.360921 kubelet[2150]: E0312 01:35:31.360867 2150 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 12 01:35:31.411510 systemd[1]: Created slice kubepods-burstable-podd170600fb5d3f4055cac98ea8e370ac0.slice - libcontainer container kubepods-burstable-podd170600fb5d3f4055cac98ea8e370ac0.slice. Mar 12 01:35:31.425623 kubelet[2150]: E0312 01:35:31.425528 2150 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:35:31.430507 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 12 01:35:31.432604 kubelet[2150]: I0312 01:35:31.432252 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:35:31.432604 kubelet[2150]: I0312 01:35:31.432359 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d170600fb5d3f4055cac98ea8e370ac0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d170600fb5d3f4055cac98ea8e370ac0\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:35:31.432604 kubelet[2150]: I0312 01:35:31.432385 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d170600fb5d3f4055cac98ea8e370ac0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d170600fb5d3f4055cac98ea8e370ac0\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:35:31.432604 kubelet[2150]: I0312 01:35:31.432406 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:31.432604 kubelet[2150]: I0312 01:35:31.432432 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:31.432897 kubelet[2150]: I0312 01:35:31.432450 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:31.432897 kubelet[2150]: I0312 01:35:31.432468 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:31.432897 kubelet[2150]: I0312 01:35:31.432487 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:31.432897 kubelet[2150]: I0312 01:35:31.432531 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d170600fb5d3f4055cac98ea8e370ac0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d170600fb5d3f4055cac98ea8e370ac0\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:35:31.432897 kubelet[2150]: E0312 01:35:31.432574 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="400ms" Mar 12 01:35:31.441069 kubelet[2150]: E0312 01:35:31.440993 2150 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:35:31.444678 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. 
Mar 12 01:35:31.447849 kubelet[2150]: E0312 01:35:31.447781 2150 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:35:31.462573 kubelet[2150]: I0312 01:35:31.462094 2150 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:35:31.462723 kubelet[2150]: E0312 01:35:31.462580 2150 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Mar 12 01:35:31.640124 kubelet[2150]: E0312 01:35:31.639629 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189bf428109a5cd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 01:35:31.214843088 +0000 UTC m=+0.973076543,LastTimestamp:2026-03-12 01:35:31.214843088 +0000 UTC m=+0.973076543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 01:35:31.665325 kubelet[2150]: I0312 01:35:31.665155 2150 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:35:31.665780 kubelet[2150]: E0312 01:35:31.665719 2150 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Mar 12 01:35:31.730203 kubelet[2150]: E0312 01:35:31.730130 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:31.731554 containerd[1454]: time="2026-03-12T01:35:31.731469515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d170600fb5d3f4055cac98ea8e370ac0,Namespace:kube-system,Attempt:0,}" Mar 12 01:35:31.745417 kubelet[2150]: E0312 01:35:31.745347 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:31.746108 containerd[1454]: time="2026-03-12T01:35:31.745983503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 12 01:35:31.751472 kubelet[2150]: E0312 01:35:31.751373 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:31.754548 containerd[1454]: time="2026-03-12T01:35:31.754502928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 12 01:35:31.833925 kubelet[2150]: E0312 01:35:31.833862 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="800ms" Mar 12 01:35:32.068227 kubelet[2150]: I0312 01:35:32.067986 2150 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:35:32.068774 kubelet[2150]: E0312 01:35:32.068687 2150 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Mar 12 01:35:32.174732 kubelet[2150]: E0312 01:35:32.174684 2150 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 01:35:32.191582 kubelet[2150]: E0312 01:35:32.191215 2150 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 01:35:32.209610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735908687.mount: Deactivated successfully. Mar 12 01:35:32.219775 containerd[1454]: time="2026-03-12T01:35:32.219587347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:35:32.223954 containerd[1454]: time="2026-03-12T01:35:32.223842977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 12 01:35:32.225408 containerd[1454]: time="2026-03-12T01:35:32.225333892Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:35:32.227172 containerd[1454]: time="2026-03-12T01:35:32.227103538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:35:32.228545 containerd[1454]: time="2026-03-12T01:35:32.228480978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:35:32.230000 containerd[1454]: time="2026-03-12T01:35:32.229952714Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:35:32.231300 containerd[1454]: time="2026-03-12T01:35:32.231209794Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:35:32.235794 containerd[1454]: time="2026-03-12T01:35:32.235671310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:35:32.238137 containerd[1454]: time="2026-03-12T01:35:32.238005053Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 491.906575ms" Mar 12 01:35:32.239143 containerd[1454]: time="2026-03-12T01:35:32.239092850Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 507.516756ms" Mar 12 01:35:32.248011 containerd[1454]: time="2026-03-12T01:35:32.247788566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 493.17965ms" Mar 12 01:35:32.403913 kubelet[2150]: E0312 01:35:32.403656 2150 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 01:35:32.559133 kubelet[2150]: E0312 01:35:32.557901 2150 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 01:35:32.568399 containerd[1454]: time="2026-03-12T01:35:32.567941951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:35:32.568399 containerd[1454]: time="2026-03-12T01:35:32.568355942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:35:32.569554 containerd[1454]: time="2026-03-12T01:35:32.568497196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:35:32.570917 containerd[1454]: time="2026-03-12T01:35:32.570602748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:35:32.570917 containerd[1454]: time="2026-03-12T01:35:32.570831625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:35:32.570917 containerd[1454]: time="2026-03-12T01:35:32.570859578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:35:32.571163 containerd[1454]: time="2026-03-12T01:35:32.571014767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:35:32.575353 containerd[1454]: time="2026-03-12T01:35:32.574017589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:35:32.589451 containerd[1454]: time="2026-03-12T01:35:32.589091445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:35:32.589451 containerd[1454]: time="2026-03-12T01:35:32.589175553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:35:32.589451 containerd[1454]: time="2026-03-12T01:35:32.589197483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:35:32.589451 containerd[1454]: time="2026-03-12T01:35:32.589383080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:35:32.608612 systemd[1]: Started cri-containerd-122723f7ee9e29327c9043c52f9ee2fa6f9815675630ddcc942a3fa631d51714.scope - libcontainer container 122723f7ee9e29327c9043c52f9ee2fa6f9815675630ddcc942a3fa631d51714. Mar 12 01:35:32.639088 kubelet[2150]: E0312 01:35:32.636991 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="1.6s" Mar 12 01:35:32.639721 systemd[1]: Started cri-containerd-1a65a82ce90a31adff35fb2c5930f6fe6f1c24eeb0244de6ed1ebede50266214.scope - libcontainer container 1a65a82ce90a31adff35fb2c5930f6fe6f1c24eeb0244de6ed1ebede50266214. Mar 12 01:35:32.854999 systemd[1]: Started cri-containerd-892be7c954d7d3c449e4d6e6c2448c0dc1c046b0d4a4f25e3ceb7b3c4c142d2c.scope - libcontainer container 892be7c954d7d3c449e4d6e6c2448c0dc1c046b0d4a4f25e3ceb7b3c4c142d2c. 
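The three `Failed to ensure lease exists, will retry` entries show the retry interval doubling: 400ms, then 800ms, then 1.6s. A small sketch of that doubling-with-a-cap pattern; the ceiling below is an assumed value for illustration, not taken from the kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 400 * time.Millisecond // first interval seen in the log
	maxInterval := 7 * time.Second     // assumed ceiling, for illustration only
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
		interval *= 2 // 400ms -> 800ms -> 1.6s, as the log shows
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```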
Mar 12 01:35:32.871709 kubelet[2150]: I0312 01:35:32.871580 2150 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:35:32.875406 kubelet[2150]: E0312 01:35:32.872011 2150 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Mar 12 01:35:32.914953 containerd[1454]: time="2026-03-12T01:35:32.914863832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"122723f7ee9e29327c9043c52f9ee2fa6f9815675630ddcc942a3fa631d51714\"" Mar 12 01:35:32.917175 kubelet[2150]: E0312 01:35:32.916932 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:32.926960 containerd[1454]: time="2026-03-12T01:35:32.926866771Z" level=info msg="CreateContainer within sandbox \"122723f7ee9e29327c9043c52f9ee2fa6f9815675630ddcc942a3fa631d51714\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 01:35:32.967592 containerd[1454]: time="2026-03-12T01:35:32.967379217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d170600fb5d3f4055cac98ea8e370ac0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a65a82ce90a31adff35fb2c5930f6fe6f1c24eeb0244de6ed1ebede50266214\"" Mar 12 01:35:32.975213 kubelet[2150]: E0312 01:35:32.974788 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:32.987916 containerd[1454]: time="2026-03-12T01:35:32.987630963Z" level=info msg="CreateContainer within sandbox \"1a65a82ce90a31adff35fb2c5930f6fe6f1c24eeb0244de6ed1ebede50266214\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 01:35:33.026526 containerd[1454]: time="2026-03-12T01:35:33.026461563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"892be7c954d7d3c449e4d6e6c2448c0dc1c046b0d4a4f25e3ceb7b3c4c142d2c\"" Mar 12 01:35:33.029231 kubelet[2150]: E0312 01:35:33.029195 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:33.033630 containerd[1454]: time="2026-03-12T01:35:33.033540411Z" level=info msg="CreateContainer within sandbox \"122723f7ee9e29327c9043c52f9ee2fa6f9815675630ddcc942a3fa631d51714\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e305f7ebb57e86fd051219ad81b81fdef5ea6f7ce568c3342f3d060fc1731471\"" Mar 12 01:35:33.034617 containerd[1454]: time="2026-03-12T01:35:33.034502706Z" level=info msg="StartContainer for \"e305f7ebb57e86fd051219ad81b81fdef5ea6f7ce568c3342f3d060fc1731471\"" Mar 12 01:35:33.038602 containerd[1454]: time="2026-03-12T01:35:33.036761720Z" level=info msg="CreateContainer within sandbox \"892be7c954d7d3c449e4d6e6c2448c0dc1c046b0d4a4f25e3ceb7b3c4c142d2c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 01:35:33.042140 containerd[1454]: time="2026-03-12T01:35:33.041937497Z" level=info msg="CreateContainer within sandbox 
\"1a65a82ce90a31adff35fb2c5930f6fe6f1c24eeb0244de6ed1ebede50266214\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"417e67f237a117948de7f9c816a02c11becea9afea647a971f7e468dc9e9315d\"" Mar 12 01:35:33.049113 containerd[1454]: time="2026-03-12T01:35:33.048500132Z" level=info msg="StartContainer for \"417e67f237a117948de7f9c816a02c11becea9afea647a971f7e468dc9e9315d\"" Mar 12 01:35:33.116233 containerd[1454]: time="2026-03-12T01:35:33.116010836Z" level=info msg="CreateContainer within sandbox \"892be7c954d7d3c449e4d6e6c2448c0dc1c046b0d4a4f25e3ceb7b3c4c142d2c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"495b8819a04467aee88c274d3f84eb41547a7cbe2af5bcefb4faf02af0f40879\"" Mar 12 01:35:33.120191 containerd[1454]: time="2026-03-12T01:35:33.119759885Z" level=info msg="StartContainer for \"495b8819a04467aee88c274d3f84eb41547a7cbe2af5bcefb4faf02af0f40879\"" Mar 12 01:35:33.129574 systemd[1]: Started cri-containerd-e305f7ebb57e86fd051219ad81b81fdef5ea6f7ce568c3342f3d060fc1731471.scope - libcontainer container e305f7ebb57e86fd051219ad81b81fdef5ea6f7ce568c3342f3d060fc1731471. Mar 12 01:35:33.154750 systemd[1]: Started cri-containerd-417e67f237a117948de7f9c816a02c11becea9afea647a971f7e468dc9e9315d.scope - libcontainer container 417e67f237a117948de7f9c816a02c11becea9afea647a971f7e468dc9e9315d. Mar 12 01:35:33.221564 systemd[1]: Started cri-containerd-495b8819a04467aee88c274d3f84eb41547a7cbe2af5bcefb4faf02af0f40879.scope - libcontainer container 495b8819a04467aee88c274d3f84eb41547a7cbe2af5bcefb4faf02af0f40879. Mar 12 01:35:33.244530 containerd[1454]: time="2026-03-12T01:35:33.244483192Z" level=info msg="StartContainer for \"e305f7ebb57e86fd051219ad81b81fdef5ea6f7ce568c3342f3d060fc1731471\" returns successfully" Mar 12 01:35:33.246588 kubelet[2150]: E0312 01:35:33.246457 2150 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 01:35:33.255068 containerd[1454]: time="2026-03-12T01:35:33.254983167Z" level=info msg="StartContainer for \"417e67f237a117948de7f9c816a02c11becea9afea647a971f7e468dc9e9315d\" returns successfully" Mar 12 01:35:33.323500 kubelet[2150]: E0312 01:35:33.323238 2150 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:35:33.325384 kubelet[2150]: E0312 01:35:33.325221 2150 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:35:33.325493 kubelet[2150]: E0312 01:35:33.325397 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:33.325821 kubelet[2150]: E0312 01:35:33.325754 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:33.472970 containerd[1454]: time="2026-03-12T01:35:33.472788116Z" level=info msg="StartContainer for \"495b8819a04467aee88c274d3f84eb41547a7cbe2af5bcefb4faf02af0f40879\" returns successfully" Mar 12 01:35:34.336685 
kubelet[2150]: E0312 01:35:34.336599 2150 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:35:34.337329 kubelet[2150]: E0312 01:35:34.336786 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:34.337329 kubelet[2150]: E0312 01:35:34.337118 2150 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:35:34.337329 kubelet[2150]: E0312 01:35:34.337229 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:34.482762 kubelet[2150]: I0312 01:35:34.482678 2150 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:35:35.340737 kubelet[2150]: E0312 01:35:35.340550 2150 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:35:35.341661 kubelet[2150]: E0312 01:35:35.340808 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:36.251970 kubelet[2150]: E0312 01:35:36.251903 2150 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:35:36.252448 kubelet[2150]: E0312 01:35:36.252203 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:36.342134 kubelet[2150]: E0312 01:35:36.342022 2150 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:35:36.342739 kubelet[2150]: E0312 01:35:36.342332 2150 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:37.054177 kubelet[2150]: E0312 01:35:37.053926 2150 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 12 01:35:37.127401 kubelet[2150]: I0312 01:35:37.127252 2150 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 12 01:35:37.143589 kubelet[2150]: I0312 01:35:37.143378 2150 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:37.211967 kubelet[2150]: I0312 01:35:37.211826 2150 apiserver.go:52] "Watching apiserver" Mar 12 01:35:37.252818 kubelet[2150]: I0312 01:35:37.251802 2150 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 01:35:37.284531 kubelet[2150]: E0312 01:35:37.284457 2150 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:37.284531 kubelet[2150]: I0312 01:35:37.284507 2150 kubelet.go:3220] "Creating a mirror pod for 
static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:35:37.286352 kubelet[2150]: E0312 01:35:37.286149 2150 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 12 01:35:37.286352 kubelet[2150]: I0312 01:35:37.286187 2150 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:35:37.288843 kubelet[2150]: E0312 01:35:37.288740 2150 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 12 01:35:40.763433 systemd[1]: Reloading requested from client PID 2444 ('systemctl') (unit session-7.scope)... Mar 12 01:35:40.763481 systemd[1]: Reloading... Mar 12 01:35:40.907232 zram_generator::config[2486]: No configuration found. Mar 12 01:35:41.068929 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:35:41.225660 systemd[1]: Reloading finished in 461 ms. Mar 12 01:35:41.301251 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:35:41.332750 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 01:35:41.333535 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:35:41.333705 systemd[1]: kubelet.service: Consumed 2.544s CPU time, 127.2M memory peak, 0B memory swap peak. Mar 12 01:35:41.340750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:35:41.593730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:35:41.606343 (kubelet)[2528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:35:41.753591 kubelet[2528]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 01:35:41.753591 kubelet[2528]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:35:41.754070 kubelet[2528]: I0312 01:35:41.753695 2528 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 01:35:41.768374 kubelet[2528]: I0312 01:35:41.766836 2528 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 12 01:35:41.768374 kubelet[2528]: I0312 01:35:41.767142 2528 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:35:41.768374 kubelet[2528]: I0312 01:35:41.767182 2528 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 01:35:41.768374 kubelet[2528]: I0312 01:35:41.767194 2528 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 01:35:41.768374 kubelet[2528]: I0312 01:35:41.767511 2528 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 01:35:41.769197 kubelet[2528]: I0312 01:35:41.769146 2528 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 01:35:41.773325 kubelet[2528]: I0312 01:35:41.773073 2528 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:35:41.777659 kubelet[2528]: E0312 01:35:41.777624 2528 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:35:41.777758 kubelet[2528]: I0312 01:35:41.777681 2528 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 12 01:35:41.789145 kubelet[2528]: I0312 01:35:41.788969 2528 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 12 01:35:41.789572 kubelet[2528]: I0312 01:35:41.789443 2528 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:35:41.789648 kubelet[2528]: I0312 01:35:41.789481 2528 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:35:41.789648 kubelet[2528]: I0312 01:35:41.789608 2528 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 01:35:41.789648 kubelet[2528]: I0312 01:35:41.789617 2528 container_manager_linux.go:306] "Creating device plugin manager" Mar 12 01:35:41.789648 kubelet[2528]: I0312 01:35:41.789640 2528 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 01:35:41.789935 kubelet[2528]: I0312 01:35:41.789798 2528 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:35:41.790592 kubelet[2528]: I0312 01:35:41.789973 2528 kubelet.go:475] "Attempting to sync node 
with API server" Mar 12 01:35:41.790592 kubelet[2528]: I0312 01:35:41.790089 2528 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:35:41.790592 kubelet[2528]: I0312 01:35:41.790112 2528 kubelet.go:387] "Adding apiserver pod source" Mar 12 01:35:41.790592 kubelet[2528]: I0312 01:35:41.790126 2528 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:35:41.792533 kubelet[2528]: I0312 01:35:41.792422 2528 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:35:41.797856 kubelet[2528]: I0312 01:35:41.797750 2528 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:35:41.797856 kubelet[2528]: I0312 01:35:41.797834 2528 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 01:35:41.812970 kubelet[2528]: I0312 01:35:41.812907 2528 server.go:1262] "Started kubelet" Mar 12 01:35:41.813966 kubelet[2528]: I0312 01:35:41.813881 2528 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:35:41.825809 kubelet[2528]: I0312 01:35:41.823197 2528 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 01:35:41.831017 kubelet[2528]: I0312 01:35:41.827982 2528 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 01:35:41.831017 kubelet[2528]: I0312 01:35:41.829205 2528 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:35:41.831017 kubelet[2528]: I0312 01:35:41.814106 2528 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:35:41.831017 kubelet[2528]: I0312 01:35:41.828193 2528 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:35:41.831380 kubelet[2528]: I0312 01:35:41.831362 2528 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 12 01:35:41.831449 kubelet[2528]: E0312 01:35:41.831436 2528 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:35:41.834562 kubelet[2528]: I0312 01:35:41.834539 2528 server.go:310] "Adding debug handlers to kubelet server" Mar 12 01:35:41.845467 kubelet[2528]: I0312 01:35:41.845336 2528 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:35:41.846064 kubelet[2528]: I0312 01:35:41.845974 2528 reconciler.go:29] "Reconciler: start to sync state" Mar 12 01:35:41.846237 kubelet[2528]: I0312 01:35:41.846168 2528 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:35:41.848773 kubelet[2528]: I0312 01:35:41.845553 2528 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 01:35:41.863696 kubelet[2528]: I0312 01:35:41.862909 2528 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:35:41.865024 kubelet[2528]: E0312 01:35:41.864351 2528 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:35:41.888320 kubelet[2528]: I0312 01:35:41.888096 2528 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 12 01:35:41.895110 kubelet[2528]: I0312 01:35:41.895010 2528 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 01:35:41.895110 kubelet[2528]: I0312 01:35:41.895098 2528 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 12 01:35:41.895323 kubelet[2528]: I0312 01:35:41.895129 2528 kubelet.go:2428] "Starting kubelet main sync loop" Mar 12 01:35:41.895323 kubelet[2528]: E0312 01:35:41.895195 2528 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:35:41.933693 kubelet[2528]: I0312 01:35:41.932513 2528 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 01:35:41.933693 kubelet[2528]: I0312 01:35:41.932531 2528 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 01:35:41.933693 kubelet[2528]: I0312 01:35:41.932548 2528 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:35:41.933693 kubelet[2528]: I0312 01:35:41.932664 2528 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 01:35:41.933693 kubelet[2528]: I0312 01:35:41.932674 2528 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 01:35:41.933693 kubelet[2528]: I0312 01:35:41.932698 2528 policy_none.go:49] "None policy: Start" Mar 12 01:35:41.933693 kubelet[2528]: I0312 01:35:41.932712 2528 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 01:35:41.933693 kubelet[2528]: I0312 01:35:41.932727 2528 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 01:35:41.933693 kubelet[2528]: I0312 01:35:41.932849 2528 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 12 01:35:41.933693 kubelet[2528]: I0312 01:35:41.932862 2528 policy_none.go:47] "Start" Mar 12 01:35:41.942229 kubelet[2528]: E0312 01:35:41.942206 2528 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:35:41.942731 kubelet[2528]: I0312 01:35:41.942711 2528 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 01:35:41.943327 kubelet[2528]: I0312 01:35:41.943004 2528 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:35:41.943811 kubelet[2528]: I0312 01:35:41.943775 2528 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 01:35:41.948723 kubelet[2528]: E0312 01:35:41.948694 2528 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 12 01:35:41.996898 kubelet[2528]: I0312 01:35:41.996776 2528 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:35:41.998005 kubelet[2528]: I0312 01:35:41.997250 2528 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:41.998005 kubelet[2528]: I0312 01:35:41.997499 2528 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:35:42.077097 kubelet[2528]: I0312 01:35:42.077018 2528 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:35:42.099798 kubelet[2528]: I0312 01:35:42.099432 2528 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 12 01:35:42.099798 kubelet[2528]: I0312 01:35:42.099558 2528 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 12 01:35:42.149188 kubelet[2528]: I0312 01:35:42.148689 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d170600fb5d3f4055cac98ea8e370ac0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d170600fb5d3f4055cac98ea8e370ac0\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:35:42.149188 kubelet[2528]: I0312 01:35:42.148772 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d170600fb5d3f4055cac98ea8e370ac0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d170600fb5d3f4055cac98ea8e370ac0\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:35:42.149188 kubelet[2528]: I0312 01:35:42.148801 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:42.149188 kubelet[2528]: I0312 01:35:42.148827 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:42.149188 kubelet[2528]: I0312 01:35:42.148855 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:42.149616 kubelet[2528]: I0312 01:35:42.148903 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d170600fb5d3f4055cac98ea8e370ac0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d170600fb5d3f4055cac98ea8e370ac0\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:35:42.149616 kubelet[2528]: I0312 01:35:42.148931 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:42.149616 kubelet[2528]: I0312 01:35:42.148954 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:42.149616 kubelet[2528]: I0312 01:35:42.148989 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:35:42.309504 kubelet[2528]: E0312 01:35:42.309429 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:42.309504 kubelet[2528]: E0312 01:35:42.309839 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:42.312890 kubelet[2528]: E0312 01:35:42.312614 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:42.793323 kubelet[2528]: I0312 01:35:42.790904 2528 apiserver.go:52] "Watching apiserver" Mar 12 01:35:42.849940 kubelet[2528]: I0312 01:35:42.849753 2528 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 01:35:42.917242 kubelet[2528]: I0312 01:35:42.917169 2528 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:35:42.924212 kubelet[2528]: I0312 01:35:42.922683 2528 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:42.924212 kubelet[2528]: I0312 01:35:42.923246 2528 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:35:43.025683 kubelet[2528]: E0312 01:35:43.025095 2528 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 12 01:35:43.025683 kubelet[2528]: E0312 01:35:43.025378 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:43.026871 kubelet[2528]: E0312 01:35:43.026470 2528 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 12 01:35:43.026871 kubelet[2528]: E0312 01:35:43.026515 2528 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:35:43.026871 kubelet[2528]: E0312 01:35:43.026763 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:43.027200 kubelet[2528]: E0312 01:35:43.027157 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:43.126237 kubelet[2528]: I0312 01:35:43.126076 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.1260587 podStartE2EDuration="2.1260587s" podCreationTimestamp="2026-03-12 01:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:35:43.089482426 +0000 UTC m=+1.409328207" watchObservedRunningTime="2026-03-12 01:35:43.1260587 +0000 UTC m=+1.445904470" Mar 12 01:35:43.126595 kubelet[2528]: I0312 01:35:43.126507 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.126501015 podStartE2EDuration="1.126501015s" podCreationTimestamp="2026-03-12 01:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:35:43.126483332 +0000 UTC m=+1.446329103" watchObservedRunningTime="2026-03-12 01:35:43.126501015 +0000 UTC m=+1.446346787" Mar 12 01:35:43.173349 kubelet[2528]: I0312 01:35:43.173167 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.1731479870000001 podStartE2EDuration="1.173147987s" podCreationTimestamp="2026-03-12 01:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:35:43.150550884 +0000 UTC m=+1.470396686" watchObservedRunningTime="2026-03-12 01:35:43.173147987 +0000 UTC m=+1.492993768" Mar 12 01:35:43.920581 kubelet[2528]: E0312 01:35:43.919375 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:43.920581 kubelet[2528]: E0312 01:35:43.920007 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:43.920581 kubelet[2528]: E0312 01:35:43.920453 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:44.668342 update_engine[1442]: I20260312 01:35:44.668200 1442 update_attempter.cc:509] Updating boot flags... 
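The `pod_startup_latency_tracker` entries report `podStartSLOduration` as the gap between `podCreationTimestamp` and `watchObservedRunningTime`, and the numbers above check out (01:35:43.1260587 − 01:35:41 = 2.1260587s). A sketch of the same arithmetic on the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kube-apiserver-localhost entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-03-12 01:35:41 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-03-12 01:35:43.1260587 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 2.1260587s, matching podStartSLOduration
}
```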
Mar 12 01:35:44.705381 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2589) Mar 12 01:35:44.785921 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2593) Mar 12 01:35:44.828850 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2593) Mar 12 01:35:45.649683 kubelet[2528]: E0312 01:35:45.648255 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:45.794776 kubelet[2528]: I0312 01:35:45.794706 2528 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 01:35:45.795552 containerd[1454]: time="2026-03-12T01:35:45.795466355Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 12 01:35:45.796406 kubelet[2528]: I0312 01:35:45.795844 2528 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 01:35:45.925220 kubelet[2528]: E0312 01:35:45.925003 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:46.783078 systemd[1]: Created slice kubepods-besteffort-podae871d41_c0f9_4a88_8fcc_2b1783765bef.slice - libcontainer container kubepods-besteffort-podae871d41_c0f9_4a88_8fcc_2b1783765bef.slice. Mar 12 01:35:46.857930 kubelet[2528]: I0312 01:35:46.857814 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae871d41-c0f9-4a88-8fcc-2b1783765bef-kube-proxy\") pod \"kube-proxy-xdm4c\" (UID: \"ae871d41-c0f9-4a88-8fcc-2b1783765bef\") " pod="kube-system/kube-proxy-xdm4c" Mar 12 01:35:46.858514 kubelet[2528]: I0312 01:35:46.857941 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae871d41-c0f9-4a88-8fcc-2b1783765bef-xtables-lock\") pod \"kube-proxy-xdm4c\" (UID: \"ae871d41-c0f9-4a88-8fcc-2b1783765bef\") " pod="kube-system/kube-proxy-xdm4c" Mar 12 01:35:46.858514 kubelet[2528]: I0312 01:35:46.857980 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae871d41-c0f9-4a88-8fcc-2b1783765bef-lib-modules\") pod \"kube-proxy-xdm4c\" (UID: \"ae871d41-c0f9-4a88-8fcc-2b1783765bef\") " pod="kube-system/kube-proxy-xdm4c" Mar 12 01:35:46.858514 kubelet[2528]: I0312 01:35:46.858003 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crmr9\" (UniqueName: \"kubernetes.io/projected/ae871d41-c0f9-4a88-8fcc-2b1783765bef-kube-api-access-crmr9\") pod \"kube-proxy-xdm4c\" (UID: \"ae871d41-c0f9-4a88-8fcc-2b1783765bef\") " pod="kube-system/kube-proxy-xdm4c" Mar 12 01:35:46.971905 systemd[1]: Created slice kubepods-besteffort-pod7719a490_9251_4628_a7c2_927ec439428c.slice - libcontainer container kubepods-besteffort-pod7719a490_9251_4628_a7c2_927ec439428c.slice. 
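The recurring `Nameserver limits exceeded` warnings come from the host's resolv.conf listing more nameservers than the libc resolver's limit of three, so the kubelet trims the list to the applied line shown in the log ("1.1.1.1 1.0.0.1 8.8.8.8"). A sketch of that check, assuming the standard /etc/resolv.conf path:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	// Collect every "nameserver <addr>" line.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > 3 {
		// Mirrors the kubelet warning: keep the first three, omit the rest.
		fmt.Printf("limit exceeded: keeping %v, omitting %v\n", servers[:3], servers[3:])
	} else {
		fmt.Println("within limits:", servers)
	}
}
```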
Mar 12 01:35:47.059981 kubelet[2528]: I0312 01:35:47.059638 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7719a490-9251-4628-a7c2-927ec439428c-var-lib-calico\") pod \"tigera-operator-5588576f44-r2r9v\" (UID: \"7719a490-9251-4628-a7c2-927ec439428c\") " pod="tigera-operator/tigera-operator-5588576f44-r2r9v" Mar 12 01:35:47.059981 kubelet[2528]: I0312 01:35:47.059745 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqw8j\" (UniqueName: \"kubernetes.io/projected/7719a490-9251-4628-a7c2-927ec439428c-kube-api-access-fqw8j\") pod \"tigera-operator-5588576f44-r2r9v\" (UID: \"7719a490-9251-4628-a7c2-927ec439428c\") " pod="tigera-operator/tigera-operator-5588576f44-r2r9v" Mar 12 01:35:47.096909 kubelet[2528]: E0312 01:35:47.096819 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:47.099167 containerd[1454]: time="2026-03-12T01:35:47.099103886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xdm4c,Uid:ae871d41-c0f9-4a88-8fcc-2b1783765bef,Namespace:kube-system,Attempt:0,}" Mar 12 01:35:47.161368 containerd[1454]: time="2026-03-12T01:35:47.160973168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:35:47.161368 containerd[1454]: time="2026-03-12T01:35:47.161101247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:35:47.161368 containerd[1454]: time="2026-03-12T01:35:47.161123930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:35:47.162802 containerd[1454]: time="2026-03-12T01:35:47.162674173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:35:47.256558 kubelet[2528]: E0312 01:35:47.253533 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:47.281011 containerd[1454]: time="2026-03-12T01:35:47.280975568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-r2r9v,Uid:7719a490-9251-4628-a7c2-927ec439428c,Namespace:tigera-operator,Attempt:0,}" Mar 12 01:35:47.285570 systemd[1]: Started cri-containerd-749db1ddc73a8da213e4074c5142d272392946e3b2bab7fee061842b54d54e40.scope - libcontainer container 749db1ddc73a8da213e4074c5142d272392946e3b2bab7fee061842b54d54e40. Mar 12 01:35:47.384238 containerd[1454]: time="2026-03-12T01:35:47.383643754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:35:47.384238 containerd[1454]: time="2026-03-12T01:35:47.383739823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:35:47.384238 containerd[1454]: time="2026-03-12T01:35:47.383766414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:35:47.384585 containerd[1454]: time="2026-03-12T01:35:47.384550421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xdm4c,Uid:ae871d41-c0f9-4a88-8fcc-2b1783765bef,Namespace:kube-system,Attempt:0,} returns sandbox id \"749db1ddc73a8da213e4074c5142d272392946e3b2bab7fee061842b54d54e40\"" Mar 12 01:35:47.385168 containerd[1454]: time="2026-03-12T01:35:47.384409563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:35:47.386328 kubelet[2528]: E0312 01:35:47.386180 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:47.401588 containerd[1454]: time="2026-03-12T01:35:47.400651726Z" level=info msg="CreateContainer within sandbox \"749db1ddc73a8da213e4074c5142d272392946e3b2bab7fee061842b54d54e40\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 01:35:47.426583 systemd[1]: Started cri-containerd-d6a15b0ea09c66f39ea12c9ca060893382c6d46e37eec080615d6654716f3e56.scope - libcontainer container d6a15b0ea09c66f39ea12c9ca060893382c6d46e37eec080615d6654716f3e56. Mar 12 01:35:47.434563 containerd[1454]: time="2026-03-12T01:35:47.434507291Z" level=info msg="CreateContainer within sandbox \"749db1ddc73a8da213e4074c5142d272392946e3b2bab7fee061842b54d54e40\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0ebcafda979a28b9a644c759f6abefd152f94434aa83a9caceb64aa490e70803\"" Mar 12 01:35:47.435934 containerd[1454]: time="2026-03-12T01:35:47.435902716Z" level=info msg="StartContainer for \"0ebcafda979a28b9a644c759f6abefd152f94434aa83a9caceb64aa490e70803\"" Mar 12 01:35:47.606644 systemd[1]: Started cri-containerd-0ebcafda979a28b9a644c759f6abefd152f94434aa83a9caceb64aa490e70803.scope - libcontainer container 0ebcafda979a28b9a644c759f6abefd152f94434aa83a9caceb64aa490e70803. 
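The RunPodSandbox → CreateContainer → StartContainer sequence above is the CRI lifecycle, driven over containerd's gRPC socket. A minimal hedged sketch that connects to the same kind of endpoint, only asking for the runtime version rather than creating anything; the socket path is containerd's default and an assumption here, not a value from this log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd CRI socket (assumed).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		fmt.Println("version:", err)
		return
	}
	fmt.Println(resp.RuntimeName, resp.RuntimeVersion) // e.g. containerd v1.7.21
}
```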
Mar 12 01:35:47.624987 containerd[1454]: time="2026-03-12T01:35:47.624934620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-r2r9v,Uid:7719a490-9251-4628-a7c2-927ec439428c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d6a15b0ea09c66f39ea12c9ca060893382c6d46e37eec080615d6654716f3e56\"" Mar 12 01:35:47.628501 containerd[1454]: time="2026-03-12T01:35:47.628445865Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 12 01:35:47.689358 containerd[1454]: time="2026-03-12T01:35:47.688574425Z" level=info msg="StartContainer for \"0ebcafda979a28b9a644c759f6abefd152f94434aa83a9caceb64aa490e70803\" returns successfully" Mar 12 01:35:47.935258 kubelet[2528]: E0312 01:35:47.935160 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:47.936123 kubelet[2528]: E0312 01:35:47.935760 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:47.979420 kubelet[2528]: I0312 01:35:47.978966 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xdm4c" podStartSLOduration=1.978945148 podStartE2EDuration="1.978945148s" podCreationTimestamp="2026-03-12 01:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:35:47.978753282 +0000 UTC m=+6.298599113" watchObservedRunningTime="2026-03-12 01:35:47.978945148 +0000 UTC m=+6.298790920" Mar 12 01:35:48.536695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3730638488.mount: Deactivated successfully. 
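The pod_startup_latency_tracker entry above reports podStartE2EDuration=1.978945148s for kube-proxy-xdm4c. That figure is simply watchObservedRunningTime minus podCreationTimestamp; a quick recomputation using only the two timestamps printed in the entry:

```go
// Recompute the kube-proxy podStartE2EDuration from the two timestamps
// printed by pod_startup_latency_tracker. The layout is Go's default
// time.Time string format, as used in the log line.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-03-12 01:35:46 +0000 UTC")
	running, _ := time.Parse(layout, "2026-03-12 01:35:47.978945148 +0000 UTC")
	fmt.Println(running.Sub(created)) // 1.978945148s, matching podStartE2EDuration
}
```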
Mar 12 01:35:49.648550 kubelet[2528]: E0312 01:35:49.607446 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:50.020691 kubelet[2528]: E0312 01:35:50.020449 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:35:51.033594 containerd[1454]: time="2026-03-12T01:35:51.033511157Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:51.037333 containerd[1454]: time="2026-03-12T01:35:51.035982860Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 12 01:35:51.039330 containerd[1454]: time="2026-03-12T01:35:51.038512912Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:51.042893 containerd[1454]: time="2026-03-12T01:35:51.042865246Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:35:51.045759 containerd[1454]: time="2026-03-12T01:35:51.045458536Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.416682925s" Mar 12 01:35:51.045877 containerd[1454]: time="2026-03-12T01:35:51.045855497Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 12 01:35:51.053315 containerd[1454]: time="2026-03-12T01:35:51.053193693Z" level=info msg="CreateContainer within sandbox \"d6a15b0ea09c66f39ea12c9ca060893382c6d46e37eec080615d6654716f3e56\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 12 01:35:51.078425 containerd[1454]: time="2026-03-12T01:35:51.078373962Z" level=info msg="CreateContainer within sandbox \"d6a15b0ea09c66f39ea12c9ca060893382c6d46e37eec080615d6654716f3e56\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ff248d85756da141293e76ad123fba07a99a8d315784f8cf7ba73fbb6fda1b44\"" Mar 12 01:35:51.079025 containerd[1454]: time="2026-03-12T01:35:51.078974222Z" level=info msg="StartContainer for \"ff248d85756da141293e76ad123fba07a99a8d315784f8cf7ba73fbb6fda1b44\"" Mar 12 01:35:51.128185 systemd[1]: run-containerd-runc-k8s.io-ff248d85756da141293e76ad123fba07a99a8d315784f8cf7ba73fbb6fda1b44-runc.euJiLl.mount: Deactivated successfully. Mar 12 01:35:51.148650 systemd[1]: Started cri-containerd-ff248d85756da141293e76ad123fba07a99a8d315784f8cf7ba73fbb6fda1b44.scope - libcontainer container ff248d85756da141293e76ad123fba07a99a8d315784f8cf7ba73fbb6fda1b44. 
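The completed pull above read 40846156 bytes of quay.io/tigera/operator:v1.40.7 in 3.416682925s, which works out to roughly 12 MB/s. A one-liner to confirm the effective pull rate, using only the two numbers from the log entries:

```go
// Effective pull throughput for the tigera-operator image, from the
// byte count and wall-clock duration reported by containerd.
package main

import "fmt"

func main() {
	const bytesRead = 40846156.0 // "bytes read=40846156"
	const seconds = 3.416682925  // "in 3.416682925s"
	fmt.Printf("%.2f MB/s\n", bytesRead/seconds/1e6) // ~11.95 MB/s
}
```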
Mar 12 01:35:51.280654 containerd[1454]: time="2026-03-12T01:35:51.280471874Z" level=info msg="StartContainer for \"ff248d85756da141293e76ad123fba07a99a8d315784f8cf7ba73fbb6fda1b44\" returns successfully" Mar 12 01:35:52.037558 kubelet[2528]: I0312 01:35:52.037480 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-r2r9v" podStartSLOduration=2.616799466 podStartE2EDuration="6.037464382s" podCreationTimestamp="2026-03-12 01:35:46 +0000 UTC" firstStartedPulling="2026-03-12 01:35:47.62790687 +0000 UTC m=+5.947752641" lastFinishedPulling="2026-03-12 01:35:51.048571776 +0000 UTC m=+9.368417557" observedRunningTime="2026-03-12 01:35:52.037113758 +0000 UTC m=+10.356959539" watchObservedRunningTime="2026-03-12 01:35:52.037464382 +0000 UTC m=+10.357310153" Mar 12 01:35:58.707944 sudo[1635]: pam_unix(sudo:session): session closed for user root Mar 12 01:35:58.712509 sshd[1631]: pam_unix(sshd:session): session closed for user core Mar 12 01:35:58.720522 systemd[1]: sshd@6-10.0.0.111:22-10.0.0.1:51658.service: Deactivated successfully. Mar 12 01:35:58.721420 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Mar 12 01:35:58.723532 systemd[1]: session-7.scope: Deactivated successfully. Mar 12 01:35:58.723807 systemd[1]: session-7.scope: Consumed 9.291s CPU time, 161.0M memory peak, 0B memory swap peak. Mar 12 01:35:58.726129 systemd-logind[1441]: Removed session 7. Mar 12 01:36:01.539950 systemd[1]: Created slice kubepods-besteffort-podeba2a0b4_cf42_4d32_b536_65f8ce21f7bf.slice - libcontainer container kubepods-besteffort-podeba2a0b4_cf42_4d32_b536_65f8ce21f7bf.slice. Mar 12 01:36:01.606990 kubelet[2528]: I0312 01:36:01.606844 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eba2a0b4-cf42-4d32-b536-65f8ce21f7bf-tigera-ca-bundle\") pod \"calico-typha-c784ccdb7-l4v74\" (UID: \"eba2a0b4-cf42-4d32-b536-65f8ce21f7bf\") " pod="calico-system/calico-typha-c784ccdb7-l4v74" Mar 12 01:36:01.606990 kubelet[2528]: I0312 01:36:01.606944 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z2s7\" (UniqueName: \"kubernetes.io/projected/eba2a0b4-cf42-4d32-b536-65f8ce21f7bf-kube-api-access-7z2s7\") pod \"calico-typha-c784ccdb7-l4v74\" (UID: \"eba2a0b4-cf42-4d32-b536-65f8ce21f7bf\") " pod="calico-system/calico-typha-c784ccdb7-l4v74" Mar 12 01:36:01.606990 kubelet[2528]: I0312 01:36:01.606973 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eba2a0b4-cf42-4d32-b536-65f8ce21f7bf-typha-certs\") pod \"calico-typha-c784ccdb7-l4v74\" (UID: \"eba2a0b4-cf42-4d32-b536-65f8ce21f7bf\") " pod="calico-system/calico-typha-c784ccdb7-l4v74" Mar 12 01:36:01.637978 systemd[1]: Created slice kubepods-besteffort-podee655f8c_22fc_4632_bf67_8ffaa13c1de2.slice - libcontainer container kubepods-besteffort-podee655f8c_22fc_4632_bf67_8ffaa13c1de2.slice. 
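The two "Created slice" entries above show kubelet's systemd cgroup naming for BestEffort pods: the pod UID is embedded in the unit name with its dashes swapped for underscores, nested under the kubepods-besteffort hierarchy, and suffixed .slice. A small sketch of that mapping, checked against the calico-typha pod UID in the log:

```go
// Derive the systemd slice unit kubelet creates for a BestEffort pod,
// matching the "Created slice kubepods-besteffort-pod....slice" entries.
package main

import (
	"fmt"
	"strings"
)

func besteffortSlice(podUID string) string {
	// Dashes in the UID would collide with systemd's unit-name syntax,
	// so kubelet replaces them with underscores before embedding the UID.
	return fmt.Sprintf("kubepods-besteffort-pod%s.slice", strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(besteffortSlice("eba2a0b4-cf42-4d32-b536-65f8ce21f7bf"))
	// kubepods-besteffort-podeba2a0b4_cf42_4d32_b536_65f8ce21f7bf.slice
}
```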
Mar 12 01:36:01.709378 kubelet[2528]: I0312 01:36:01.708382 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-cni-bin-dir\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709378 kubelet[2528]: I0312 01:36:01.708546 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-tigera-ca-bundle\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709378 kubelet[2528]: I0312 01:36:01.708624 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-var-run-calico\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709378 kubelet[2528]: I0312 01:36:01.708667 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-lib-modules\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709378 kubelet[2528]: I0312 01:36:01.708691 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-policysync\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709679 kubelet[2528]: I0312 01:36:01.708724 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-var-lib-calico\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709679 kubelet[2528]: I0312 01:36:01.708749 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-xtables-lock\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709679 kubelet[2528]: I0312 01:36:01.708772 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4wrz\" (UniqueName: \"kubernetes.io/projected/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-kube-api-access-n4wrz\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709679 kubelet[2528]: I0312 01:36:01.708797 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-cni-net-dir\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709679 kubelet[2528]: I0312 01:36:01.708818 2528 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-node-certs\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709853 kubelet[2528]: I0312 01:36:01.708855 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-nodeproc\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709853 kubelet[2528]: I0312 01:36:01.708879 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-sys-fs\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709853 kubelet[2528]: I0312 01:36:01.708920 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-bpffs\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709853 kubelet[2528]: I0312 01:36:01.708980 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-cni-log-dir\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.709853 kubelet[2528]: I0312 01:36:01.709007 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ee655f8c-22fc-4632-bf67-8ffaa13c1de2-flexvol-driver-host\") pod \"calico-node-g88tn\" (UID: \"ee655f8c-22fc-4632-bf67-8ffaa13c1de2\") " pod="calico-system/calico-node-g88tn" Mar 12 01:36:01.763792 kubelet[2528]: E0312 01:36:01.763685 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53" Mar 12 01:36:01.819710 kubelet[2528]: E0312 01:36:01.818233 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.819710 kubelet[2528]: W0312 01:36:01.818357 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.819710 kubelet[2528]: E0312 01:36:01.818384 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:36:01.831867 kubelet[2528]: E0312 01:36:01.831767 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.831867 kubelet[2528]: W0312 01:36:01.831793 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.831867 kubelet[2528]: E0312 01:36:01.831819 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.848635 kubelet[2528]: E0312 01:36:01.848475 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:01.851003 containerd[1454]: time="2026-03-12T01:36:01.850599023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c784ccdb7-l4v74,Uid:eba2a0b4-cf42-4d32-b536-65f8ce21f7bf,Namespace:calico-system,Attempt:0,}" Mar 12 01:36:01.906180 containerd[1454]: time="2026-03-12T01:36:01.905972238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:36:01.906180 containerd[1454]: time="2026-03-12T01:36:01.906127969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:36:01.906180 containerd[1454]: time="2026-03-12T01:36:01.906154488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:01.907033 containerd[1454]: time="2026-03-12T01:36:01.906535460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:01.911887 kubelet[2528]: E0312 01:36:01.911822 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.911887 kubelet[2528]: W0312 01:36:01.911863 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.912022 kubelet[2528]: E0312 01:36:01.911889 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:36:01.912427 kubelet[2528]: I0312 01:36:01.911968 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e41b2407-7aa6-4ede-8904-d6670e550c53-kubelet-dir\") pod \"csi-node-driver-fz78r\" (UID: \"e41b2407-7aa6-4ede-8904-d6670e550c53\") " pod="calico-system/csi-node-driver-fz78r" Mar 12 01:36:01.912611 kubelet[2528]: E0312 01:36:01.912548 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.912611 kubelet[2528]: W0312 01:36:01.912567 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.912611 kubelet[2528]: E0312 01:36:01.912582 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.913187 kubelet[2528]: E0312 01:36:01.913134 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.913249 kubelet[2528]: W0312 01:36:01.913216 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.913249 kubelet[2528]: E0312 01:36:01.913233 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.913941 kubelet[2528]: E0312 01:36:01.913859 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.913941 kubelet[2528]: W0312 01:36:01.913891 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.913941 kubelet[2528]: E0312 01:36:01.913905 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.913941 kubelet[2528]: I0312 01:36:01.913930 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gnfz\" (UniqueName: \"kubernetes.io/projected/e41b2407-7aa6-4ede-8904-d6670e550c53-kube-api-access-5gnfz\") pod \"csi-node-driver-fz78r\" (UID: \"e41b2407-7aa6-4ede-8904-d6670e550c53\") " pod="calico-system/csi-node-driver-fz78r" Mar 12 01:36:01.914505 kubelet[2528]: E0312 01:36:01.914477 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.914569 kubelet[2528]: W0312 01:36:01.914507 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.914595 kubelet[2528]: E0312 01:36:01.914573 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:36:01.914664 kubelet[2528]: I0312 01:36:01.914633 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e41b2407-7aa6-4ede-8904-d6670e550c53-varrun\") pod \"csi-node-driver-fz78r\" (UID: \"e41b2407-7aa6-4ede-8904-d6670e550c53\") " pod="calico-system/csi-node-driver-fz78r" Mar 12 01:36:01.915168 kubelet[2528]: E0312 01:36:01.915139 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.915206 kubelet[2528]: W0312 01:36:01.915169 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.915206 kubelet[2528]: E0312 01:36:01.915185 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.915718 kubelet[2528]: E0312 01:36:01.915678 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.915805 kubelet[2528]: W0312 01:36:01.915780 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.915942 kubelet[2528]: E0312 01:36:01.915900 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.916655 kubelet[2528]: E0312 01:36:01.916609 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.916655 kubelet[2528]: W0312 01:36:01.916642 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.916745 kubelet[2528]: E0312 01:36:01.916658 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.916745 kubelet[2528]: I0312 01:36:01.916690 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e41b2407-7aa6-4ede-8904-d6670e550c53-registration-dir\") pod \"csi-node-driver-fz78r\" (UID: \"e41b2407-7aa6-4ede-8904-d6670e550c53\") " pod="calico-system/csi-node-driver-fz78r" Mar 12 01:36:01.917211 kubelet[2528]: E0312 01:36:01.917162 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.917312 kubelet[2528]: W0312 01:36:01.917248 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.917378 kubelet[2528]: E0312 01:36:01.917351 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:36:01.917601 kubelet[2528]: I0312 01:36:01.917547 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e41b2407-7aa6-4ede-8904-d6670e550c53-socket-dir\") pod \"csi-node-driver-fz78r\" (UID: \"e41b2407-7aa6-4ede-8904-d6670e550c53\") " pod="calico-system/csi-node-driver-fz78r" Mar 12 01:36:01.918396 kubelet[2528]: E0312 01:36:01.918259 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.918396 kubelet[2528]: W0312 01:36:01.918366 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.918490 kubelet[2528]: E0312 01:36:01.918381 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.918849 kubelet[2528]: E0312 01:36:01.918816 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.918849 kubelet[2528]: W0312 01:36:01.918843 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.918912 kubelet[2528]: E0312 01:36:01.918858 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.919321 kubelet[2528]: E0312 01:36:01.919230 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.919354 kubelet[2528]: W0312 01:36:01.919341 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.919383 kubelet[2528]: E0312 01:36:01.919357 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.919960 kubelet[2528]: E0312 01:36:01.919900 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.920030 kubelet[2528]: W0312 01:36:01.919934 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.920030 kubelet[2528]: E0312 01:36:01.919989 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:36:01.920768 kubelet[2528]: E0312 01:36:01.920695 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.920973 kubelet[2528]: W0312 01:36:01.920901 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.920973 kubelet[2528]: E0312 01:36:01.920947 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.921561 kubelet[2528]: E0312 01:36:01.921520 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:01.921561 kubelet[2528]: W0312 01:36:01.921535 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:01.921561 kubelet[2528]: E0312 01:36:01.921550 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:01.935510 systemd[1]: Started cri-containerd-f4ea41d9553c7e99f51420442dd61332544bb47474e9491e4be069d97088dd60.scope - libcontainer container f4ea41d9553c7e99f51420442dd61332544bb47474e9491e4be069d97088dd60. Mar 12 01:36:01.948493 containerd[1454]: time="2026-03-12T01:36:01.948428921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g88tn,Uid:ee655f8c-22fc-4632-bf67-8ffaa13c1de2,Namespace:calico-system,Attempt:0,}" Mar 12 01:36:01.991199 containerd[1454]: time="2026-03-12T01:36:01.990748260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:36:01.991199 containerd[1454]: time="2026-03-12T01:36:01.990838709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:36:01.991199 containerd[1454]: time="2026-03-12T01:36:01.990849039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:01.991199 containerd[1454]: time="2026-03-12T01:36:01.990964996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:02.005526 containerd[1454]: time="2026-03-12T01:36:02.005424197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c784ccdb7-l4v74,Uid:eba2a0b4-cf42-4d32-b536-65f8ce21f7bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4ea41d9553c7e99f51420442dd61332544bb47474e9491e4be069d97088dd60\"" Mar 12 01:36:02.007014 kubelet[2528]: E0312 01:36:02.006957 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:02.008230 containerd[1454]: time="2026-03-12T01:36:02.008128063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 12 01:36:02.020832 kubelet[2528]: E0312 01:36:02.020572 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.020832 kubelet[2528]: W0312 01:36:02.020720 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.020832 kubelet[2528]: E0312 01:36:02.020738 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.021135 kubelet[2528]: E0312 01:36:02.021100 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.021135 kubelet[2528]: W0312 01:36:02.021131 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.021249 kubelet[2528]: E0312 01:36:02.021146 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.022862 kubelet[2528]: E0312 01:36:02.022776 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.022862 kubelet[2528]: W0312 01:36:02.022807 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.022862 kubelet[2528]: E0312 01:36:02.022818 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.023655 kubelet[2528]: E0312 01:36:02.023565 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.023655 kubelet[2528]: W0312 01:36:02.023603 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.023655 kubelet[2528]: E0312 01:36:02.023620 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:36:02.024007 kubelet[2528]: E0312 01:36:02.023985 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.024007 kubelet[2528]: W0312 01:36:02.024003 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.024152 kubelet[2528]: E0312 01:36:02.024020 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.024465 kubelet[2528]: E0312 01:36:02.024428 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.024465 kubelet[2528]: W0312 01:36:02.024440 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.024465 kubelet[2528]: E0312 01:36:02.024451 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.025347 kubelet[2528]: E0312 01:36:02.025236 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.025347 kubelet[2528]: W0312 01:36:02.025303 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.025347 kubelet[2528]: E0312 01:36:02.025316 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.025619 kubelet[2528]: E0312 01:36:02.025585 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.025619 kubelet[2528]: W0312 01:36:02.025604 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.025619 kubelet[2528]: E0312 01:36:02.025618 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.026026 kubelet[2528]: E0312 01:36:02.025988 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.026026 kubelet[2528]: W0312 01:36:02.026022 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.026157 kubelet[2528]: E0312 01:36:02.026037 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:36:02.026629 kubelet[2528]: E0312 01:36:02.026555 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.026629 kubelet[2528]: W0312 01:36:02.026587 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.026629 kubelet[2528]: E0312 01:36:02.026602 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.026941 kubelet[2528]: E0312 01:36:02.026907 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.026941 kubelet[2528]: W0312 01:36:02.026932 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.026941 kubelet[2528]: E0312 01:36:02.026944 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.027372 kubelet[2528]: E0312 01:36:02.027341 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.027372 kubelet[2528]: W0312 01:36:02.027365 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.027458 kubelet[2528]: E0312 01:36:02.027382 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.028227 kubelet[2528]: E0312 01:36:02.028175 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.028474 systemd[1]: Started cri-containerd-bb39506a172802f7e672a0807d3ad0ae509666d0d9c3b14a02d08be97b52862b.scope - libcontainer container bb39506a172802f7e672a0807d3ad0ae509666d0d9c3b14a02d08be97b52862b. Mar 12 01:36:02.028846 kubelet[2528]: W0312 01:36:02.028751 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.028846 kubelet[2528]: E0312 01:36:02.028818 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:36:02.029922 kubelet[2528]: E0312 01:36:02.029791 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.029922 kubelet[2528]: W0312 01:36:02.029843 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.029922 kubelet[2528]: E0312 01:36:02.029855 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.030416 kubelet[2528]: E0312 01:36:02.030350 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.030416 kubelet[2528]: W0312 01:36:02.030361 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.030416 kubelet[2528]: E0312 01:36:02.030374 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.031132 kubelet[2528]: E0312 01:36:02.030860 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.031132 kubelet[2528]: W0312 01:36:02.031015 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.031132 kubelet[2528]: E0312 01:36:02.031034 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.032505 kubelet[2528]: E0312 01:36:02.032435 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.032505 kubelet[2528]: W0312 01:36:02.032446 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.032766 kubelet[2528]: E0312 01:36:02.032615 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.033821 kubelet[2528]: E0312 01:36:02.033606 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.033821 kubelet[2528]: W0312 01:36:02.033618 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.033821 kubelet[2528]: E0312 01:36:02.033628 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:36:02.034033 kubelet[2528]: E0312 01:36:02.034014 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.034339 kubelet[2528]: W0312 01:36:02.034120 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.034339 kubelet[2528]: E0312 01:36:02.034137 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.034637 kubelet[2528]: E0312 01:36:02.034622 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.034811 kubelet[2528]: W0312 01:36:02.034689 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.034811 kubelet[2528]: E0312 01:36:02.034703 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.035453 kubelet[2528]: E0312 01:36:02.035106 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.035453 kubelet[2528]: W0312 01:36:02.035135 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.035453 kubelet[2528]: E0312 01:36:02.035162 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.035979 kubelet[2528]: E0312 01:36:02.035955 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.036142 kubelet[2528]: W0312 01:36:02.036127 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.036389 kubelet[2528]: E0312 01:36:02.036377 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.037318 kubelet[2528]: E0312 01:36:02.037300 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.037542 kubelet[2528]: W0312 01:36:02.037402 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.037542 kubelet[2528]: E0312 01:36:02.037425 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:36:02.038136 kubelet[2528]: E0312 01:36:02.038028 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.038136 kubelet[2528]: W0312 01:36:02.038069 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.038136 kubelet[2528]: E0312 01:36:02.038081 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.038697 kubelet[2528]: E0312 01:36:02.038543 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.038697 kubelet[2528]: W0312 01:36:02.038555 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.038697 kubelet[2528]: E0312 01:36:02.038566 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.046312 kubelet[2528]: E0312 01:36:02.043798 2528 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:36:02.046312 kubelet[2528]: W0312 01:36:02.043817 2528 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:36:02.046312 kubelet[2528]: E0312 01:36:02.043837 2528 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:36:02.066368 containerd[1454]: time="2026-03-12T01:36:02.066330818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g88tn,Uid:ee655f8c-22fc-4632-bf67-8ffaa13c1de2,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb39506a172802f7e672a0807d3ad0ae509666d0d9c3b14a02d08be97b52862b\"" Mar 12 01:36:03.042347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303812710.mount: Deactivated successfully. 
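The driver-call failures repeated throughout this stretch share one root cause: kubelet probes the FlexVolume plugin directory nodeagent~uds, the uds binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is absent, the init call therefore produces no output, and unmarshalling that empty output as the driver's JSON reply fails with "unexpected end of JSON input". A sketch of both halves, assuming the documented FlexVolume reply shape (a JSON object carrying at least a status field):

```go
// Reproduce the "unexpected end of JSON input" from the FlexVolume probe:
// a missing driver binary yields empty output, and an empty string is not
// valid JSON for the driver's reply.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors a minimal FlexVolume reply: a working driver's
// `init` is expected to print something like
// {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st driverStatus
	err := json.Unmarshal([]byte(""), &st) // the empty output kubelet received
	fmt.Println(err)                       // unexpected end of JSON input
}
```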
Mar 12 01:36:03.914739 kubelet[2528]: E0312 01:36:03.914645 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53" Mar 12 01:36:06.240101 kubelet[2528]: E0312 01:36:06.234201 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53" Mar 12 01:36:07.591565 containerd[1454]: time="2026-03-12T01:36:07.590834032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:07.593770 containerd[1454]: time="2026-03-12T01:36:07.593443385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 12 01:36:07.595852 containerd[1454]: time="2026-03-12T01:36:07.595751351Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:07.600558 containerd[1454]: time="2026-03-12T01:36:07.600487811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:07.602928 containerd[1454]: time="2026-03-12T01:36:07.602801590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 5.594591564s" Mar 12 01:36:07.602928 containerd[1454]: time="2026-03-12T01:36:07.602909372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 12 01:36:07.610321 containerd[1454]: time="2026-03-12T01:36:07.608545724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 01:36:07.641443 containerd[1454]: time="2026-03-12T01:36:07.641136289Z" level=info msg="CreateContainer within sandbox \"f4ea41d9553c7e99f51420442dd61332544bb47474e9491e4be069d97088dd60\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 01:36:07.690606 containerd[1454]: time="2026-03-12T01:36:07.690508304Z" level=info msg="CreateContainer within sandbox \"f4ea41d9553c7e99f51420442dd61332544bb47474e9491e4be069d97088dd60\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a8256f8bb740f7fd167892712eefd046af874656e5ae9b6d121e0a996978cc1a\"" Mar 12 01:36:07.693385 containerd[1454]: time="2026-03-12T01:36:07.691533101Z" level=info msg="StartContainer for \"a8256f8bb740f7fd167892712eefd046af874656e5ae9b6d121e0a996978cc1a\"" Mar 12 01:36:07.756254 systemd[1]: Started cri-containerd-a8256f8bb740f7fd167892712eefd046af874656e5ae9b6d121e0a996978cc1a.scope - libcontainer container 
a8256f8bb740f7fd167892712eefd046af874656e5ae9b6d121e0a996978cc1a. Mar 12 01:36:07.830041 containerd[1454]: time="2026-03-12T01:36:07.829838633Z" level=info msg="StartContainer for \"a8256f8bb740f7fd167892712eefd046af874656e5ae9b6d121e0a996978cc1a\" returns successfully" Mar 12 01:36:07.898648 kubelet[2528]: E0312 01:36:07.898469 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53" Mar 12 01:36:08.223622 containerd[1454]: time="2026-03-12T01:36:08.223038070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:08.227041 containerd[1454]: time="2026-03-12T01:36:08.226916795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 12 01:36:08.228617 containerd[1454]: time="2026-03-12T01:36:08.228462942Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:08.232471 containerd[1454]: time="2026-03-12T01:36:08.232397238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:08.235791 containerd[1454]: time="2026-03-12T01:36:08.233813377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 625.198203ms" Mar 12 01:36:08.235791 containerd[1454]: time="2026-03-12T01:36:08.234958164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 12 01:36:08.247315 containerd[1454]: time="2026-03-12T01:36:08.247132700Z" level=info msg="CreateContainer within sandbox \"bb39506a172802f7e672a0807d3ad0ae509666d0d9c3b14a02d08be97b52862b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 01:36:08.373583 containerd[1454]: time="2026-03-12T01:36:08.373427025Z" level=info msg="CreateContainer within sandbox \"bb39506a172802f7e672a0807d3ad0ae509666d0d9c3b14a02d08be97b52862b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ad863ca01bf5392fea3ad370e2a9c51d0d91bb243c473a481464a5aa288b0068\"" Mar 12 01:36:08.376845 containerd[1454]: time="2026-03-12T01:36:08.376791686Z" level=info msg="StartContainer for \"ad863ca01bf5392fea3ad370e2a9c51d0d91bb243c473a481464a5aa288b0068\"" Mar 12 01:36:08.462329 systemd[1]: Started cri-containerd-ad863ca01bf5392fea3ad370e2a9c51d0d91bb243c473a481464a5aa288b0068.scope - libcontainer container ad863ca01bf5392fea3ad370e2a9c51d0d91bb243c473a481464a5aa288b0068. 
Mar 12 01:36:08.526170 containerd[1454]: time="2026-03-12T01:36:08.526000724Z" level=info msg="StartContainer for \"ad863ca01bf5392fea3ad370e2a9c51d0d91bb243c473a481464a5aa288b0068\" returns successfully" Mar 12 01:36:08.568085 systemd[1]: cri-containerd-ad863ca01bf5392fea3ad370e2a9c51d0d91bb243c473a481464a5aa288b0068.scope: Deactivated successfully. Mar 12 01:36:08.574341 kubelet[2528]: E0312 01:36:08.574221 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:08.622329 kubelet[2528]: I0312 01:36:08.619404 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c784ccdb7-l4v74" podStartSLOduration=2.019526517 podStartE2EDuration="7.61938255s" podCreationTimestamp="2026-03-12 01:36:01 +0000 UTC" firstStartedPulling="2026-03-12 01:36:02.007800202 +0000 UTC m=+20.327645973" lastFinishedPulling="2026-03-12 01:36:07.607656225 +0000 UTC m=+25.927502006" observedRunningTime="2026-03-12 01:36:08.617509981 +0000 UTC m=+26.937355752" watchObservedRunningTime="2026-03-12 01:36:08.61938255 +0000 UTC m=+26.939228321" Mar 12 01:36:08.670670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad863ca01bf5392fea3ad370e2a9c51d0d91bb243c473a481464a5aa288b0068-rootfs.mount: Deactivated successfully. Mar 12 01:36:08.731532 containerd[1454]: time="2026-03-12T01:36:08.731397714Z" level=info msg="shim disconnected" id=ad863ca01bf5392fea3ad370e2a9c51d0d91bb243c473a481464a5aa288b0068 namespace=k8s.io Mar 12 01:36:08.732188 containerd[1454]: time="2026-03-12T01:36:08.731550159Z" level=warning msg="cleaning up after shim disconnected" id=ad863ca01bf5392fea3ad370e2a9c51d0d91bb243c473a481464a5aa288b0068 namespace=k8s.io Mar 12 01:36:08.732188 containerd[1454]: time="2026-03-12T01:36:08.731563584Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:36:08.790389 containerd[1454]: time="2026-03-12T01:36:08.790220494Z" level=warning msg="cleanup warnings time=\"2026-03-12T01:36:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 12 01:36:09.604228 containerd[1454]: time="2026-03-12T01:36:09.604117425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 01:36:09.604940 kubelet[2528]: I0312 01:36:09.604480 2528 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:36:09.604940 kubelet[2528]: E0312 01:36:09.604831 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:09.897538 kubelet[2528]: E0312 01:36:09.897225 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53" Mar 12 01:36:11.899552 kubelet[2528]: E0312 01:36:11.897353 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53" 
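The dns.go:154 "Nameserver limits exceeded" errors recurring through this log (including twice above) mean the node's resolv.conf lists more nameservers than the resolver limit of three, so kubelet warns and applies only the first three (here 1.1.1.1 1.0.0.1 8.8.8.8). A sketch of the same check, assuming a standard /etc/resolv.conf layout and glibc's limit of three nameservers:

```go
// Mimic kubelet's nameserver-limit check: glibc's resolver honors at most
// three nameservers, so kubelet warns and truncates when the node's
// resolv.conf lists more (the applied line in this log keeps exactly
// three: 1.1.1.1 1.0.0.1 8.8.8.8).
package main

import (
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	data, err := os.ReadFile("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var servers []string
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```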
Mar 12 01:36:13.902364 kubelet[2528]: E0312 01:36:13.902158 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53"
Mar 12 01:36:15.901688 kubelet[2528]: E0312 01:36:15.901636 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53"
Mar 12 01:36:17.018975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount688471294.mount: Deactivated successfully.
Mar 12 01:36:17.376677 containerd[1454]: time="2026-03-12T01:36:17.376585016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:36:17.378690 containerd[1454]: time="2026-03-12T01:36:17.377989668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 12 01:36:17.409301 containerd[1454]: time="2026-03-12T01:36:17.409015176Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:36:17.412934 containerd[1454]: time="2026-03-12T01:36:17.412847272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:36:17.415162 containerd[1454]: time="2026-03-12T01:36:17.414175860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.809984526s"
Mar 12 01:36:17.415240 containerd[1454]: time="2026-03-12T01:36:17.415165369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 12 01:36:17.424278 containerd[1454]: time="2026-03-12T01:36:17.424162749Z" level=info msg="CreateContainer within sandbox \"bb39506a172802f7e672a0807d3ad0ae509666d0d9c3b14a02d08be97b52862b\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 12 01:36:17.516649 containerd[1454]: time="2026-03-12T01:36:17.516535265Z" level=info msg="CreateContainer within sandbox \"bb39506a172802f7e672a0807d3ad0ae509666d0d9c3b14a02d08be97b52862b\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"3c2f0048af13786ba681a336e81bc44b1681eaa698940a77b1c4db249c825fff\""
Mar 12 01:36:17.519384 containerd[1454]: time="2026-03-12T01:36:17.517616409Z" level=info msg="StartContainer for \"3c2f0048af13786ba681a336e81bc44b1681eaa698940a77b1c4db249c825fff\""
Mar 12 01:36:17.598552 systemd[1]: Started cri-containerd-3c2f0048af13786ba681a336e81bc44b1681eaa698940a77b1c4db249c825fff.scope - libcontainer container 3c2f0048af13786ba681a336e81bc44b1681eaa698940a77b1c4db249c825fff.
Mar 12 01:36:17.687550 containerd[1454]: time="2026-03-12T01:36:17.687427176Z" level=info msg="StartContainer for \"3c2f0048af13786ba681a336e81bc44b1681eaa698940a77b1c4db249c825fff\" returns successfully"
Mar 12 01:36:17.784221 systemd[1]: cri-containerd-3c2f0048af13786ba681a336e81bc44b1681eaa698940a77b1c4db249c825fff.scope: Deactivated successfully.
Mar 12 01:36:17.851138 containerd[1454]: time="2026-03-12T01:36:17.850936437Z" level=info msg="shim disconnected" id=3c2f0048af13786ba681a336e81bc44b1681eaa698940a77b1c4db249c825fff namespace=k8s.io
Mar 12 01:36:17.851138 containerd[1454]: time="2026-03-12T01:36:17.851014252Z" level=warning msg="cleaning up after shim disconnected" id=3c2f0048af13786ba681a336e81bc44b1681eaa698940a77b1c4db249c825fff namespace=k8s.io
Mar 12 01:36:17.851138 containerd[1454]: time="2026-03-12T01:36:17.851027587Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:36:17.896574 kubelet[2528]: E0312 01:36:17.896470 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53"
Mar 12 01:36:18.018008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c2f0048af13786ba681a336e81bc44b1681eaa698940a77b1c4db249c825fff-rootfs.mount: Deactivated successfully.
Mar 12 01:36:18.637517 containerd[1454]: time="2026-03-12T01:36:18.637159146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 12 01:36:19.698541 kubelet[2528]: I0312 01:36:19.698392 2528 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 01:36:19.699187 kubelet[2528]: E0312 01:36:19.699057 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:36:19.902586 kubelet[2528]: E0312 01:36:19.902158 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53"
Mar 12 01:36:20.642353 kubelet[2528]: E0312 01:36:20.641011 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:36:21.045223 containerd[1454]: time="2026-03-12T01:36:21.045147693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:36:21.046967 containerd[1454]: time="2026-03-12T01:36:21.046871033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 12 01:36:21.048585 containerd[1454]: time="2026-03-12T01:36:21.048485559Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:36:21.052426 containerd[1454]: time="2026-03-12T01:36:21.052042820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:36:21.056112 containerd[1454]: time="2026-03-12T01:36:21.055060231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.417854276s"
Mar 12 01:36:21.056112 containerd[1454]: time="2026-03-12T01:36:21.055217485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 12 01:36:21.071421 containerd[1454]: time="2026-03-12T01:36:21.071179771Z" level=info msg="CreateContainer within sandbox \"bb39506a172802f7e672a0807d3ad0ae509666d0d9c3b14a02d08be97b52862b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 12 01:36:21.115789 containerd[1454]: time="2026-03-12T01:36:21.115690660Z" level=info msg="CreateContainer within sandbox \"bb39506a172802f7e672a0807d3ad0ae509666d0d9c3b14a02d08be97b52862b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9ac89bb7a043540403af7b10ca154af1d4fc98e2b0083c52ec8517ea1776abc1\""
Mar 12 01:36:21.117032 containerd[1454]: time="2026-03-12T01:36:21.116915654Z" level=info msg="StartContainer for \"9ac89bb7a043540403af7b10ca154af1d4fc98e2b0083c52ec8517ea1776abc1\""
Mar 12 01:36:21.209449 systemd[1]: Started cri-containerd-9ac89bb7a043540403af7b10ca154af1d4fc98e2b0083c52ec8517ea1776abc1.scope - libcontainer container 9ac89bb7a043540403af7b10ca154af1d4fc98e2b0083c52ec8517ea1776abc1.
Mar 12 01:36:21.292795 containerd[1454]: time="2026-03-12T01:36:21.292716878Z" level=info msg="StartContainer for \"9ac89bb7a043540403af7b10ca154af1d4fc98e2b0083c52ec8517ea1776abc1\" returns successfully"
Mar 12 01:36:21.900842 kubelet[2528]: E0312 01:36:21.900711 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53"
Mar 12 01:36:22.718351 systemd[1]: cri-containerd-9ac89bb7a043540403af7b10ca154af1d4fc98e2b0083c52ec8517ea1776abc1.scope: Deactivated successfully.
Mar 12 01:36:22.719013 systemd[1]: cri-containerd-9ac89bb7a043540403af7b10ca154af1d4fc98e2b0083c52ec8517ea1776abc1.scope: Consumed 1.098s CPU time.
Mar 12 01:36:22.781904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ac89bb7a043540403af7b10ca154af1d4fc98e2b0083c52ec8517ea1776abc1-rootfs.mount: Deactivated successfully.
Mar 12 01:36:22.795214 containerd[1454]: time="2026-03-12T01:36:22.792813723Z" level=info msg="shim disconnected" id=9ac89bb7a043540403af7b10ca154af1d4fc98e2b0083c52ec8517ea1776abc1 namespace=k8s.io
Mar 12 01:36:22.795214 containerd[1454]: time="2026-03-12T01:36:22.792942444Z" level=warning msg="cleaning up after shim disconnected" id=9ac89bb7a043540403af7b10ca154af1d4fc98e2b0083c52ec8517ea1776abc1 namespace=k8s.io
Mar 12 01:36:22.795214 containerd[1454]: time="2026-03-12T01:36:22.792958413Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:36:22.805578 kubelet[2528]: I0312 01:36:22.805541 2528 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 12 01:36:22.984785 systemd[1]: Created slice kubepods-burstable-pod87896f19_89d0_488e_a664_14de866626f3.slice - libcontainer container kubepods-burstable-pod87896f19_89d0_488e_a664_14de866626f3.slice.
Mar 12 01:36:23.008176 systemd[1]: Created slice kubepods-besteffort-poddd73580a_b13b_41aa_8b3e_da326a7dc9c7.slice - libcontainer container kubepods-besteffort-poddd73580a_b13b_41aa_8b3e_da326a7dc9c7.slice.
Mar 12 01:36:23.024344 kubelet[2528]: I0312 01:36:23.024308 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw5v2\" (UniqueName: \"kubernetes.io/projected/dd73580a-b13b-41aa-8b3e-da326a7dc9c7-kube-api-access-zw5v2\") pod \"calico-kube-controllers-7cf4cfd8c5-kpffb\" (UID: \"dd73580a-b13b-41aa-8b3e-da326a7dc9c7\") " pod="calico-system/calico-kube-controllers-7cf4cfd8c5-kpffb"
Mar 12 01:36:23.025642 kubelet[2528]: I0312 01:36:23.024873 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/503f8cf5-b92a-411c-8353-481b71d6c97f-config-volume\") pod \"coredns-66bc5c9577-bc54q\" (UID: \"503f8cf5-b92a-411c-8353-481b71d6c97f\") " pod="kube-system/coredns-66bc5c9577-bc54q"
Mar 12 01:36:23.025642 kubelet[2528]: I0312 01:36:23.024929 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd73580a-b13b-41aa-8b3e-da326a7dc9c7-tigera-ca-bundle\") pod \"calico-kube-controllers-7cf4cfd8c5-kpffb\" (UID: \"dd73580a-b13b-41aa-8b3e-da326a7dc9c7\") " pod="calico-system/calico-kube-controllers-7cf4cfd8c5-kpffb"
Mar 12 01:36:23.025642 kubelet[2528]: I0312 01:36:23.024952 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5999426b-6640-4a93-bb5d-5d94700e760d-whisker-ca-bundle\") pod \"whisker-58846f54f5-7rsmh\" (UID: \"5999426b-6640-4a93-bb5d-5d94700e760d\") " pod="calico-system/whisker-58846f54f5-7rsmh"
Mar 12 01:36:23.025642 kubelet[2528]: I0312 01:36:23.024979 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v62l\" (UniqueName: \"kubernetes.io/projected/5999426b-6640-4a93-bb5d-5d94700e760d-kube-api-access-6v62l\") pod \"whisker-58846f54f5-7rsmh\" (UID: \"5999426b-6640-4a93-bb5d-5d94700e760d\") " pod="calico-system/whisker-58846f54f5-7rsmh"
Mar 12 01:36:23.025642 kubelet[2528]: I0312 01:36:23.025032 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87896f19-89d0-488e-a664-14de866626f3-config-volume\") pod \"coredns-66bc5c9577-977fw\" (UID: \"87896f19-89d0-488e-a664-14de866626f3\") " pod="kube-system/coredns-66bc5c9577-977fw"
Mar 12 01:36:23.025867 kubelet[2528]: I0312 01:36:23.025057 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fe93df9-5176-42c7-b8e3-6176eea7ca40-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-jqpbf\" (UID: \"8fe93df9-5176-42c7-b8e3-6176eea7ca40\") " pod="calico-system/goldmane-cccfbd5cf-jqpbf"
Mar 12 01:36:23.025867 kubelet[2528]: I0312 01:36:23.025118 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htdj\" (UniqueName: \"kubernetes.io/projected/503f8cf5-b92a-411c-8353-481b71d6c97f-kube-api-access-5htdj\") pod \"coredns-66bc5c9577-bc54q\" (UID: \"503f8cf5-b92a-411c-8353-481b71d6c97f\") " pod="kube-system/coredns-66bc5c9577-bc54q"
Mar 12 01:36:23.025867 kubelet[2528]: I0312 01:36:23.025147 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f00bf4e6-320f-408f-93a5-5bdfb046e6a2-calico-apiserver-certs\") pod \"calico-apiserver-5fcfc6547b-67zph\" (UID: \"f00bf4e6-320f-408f-93a5-5bdfb046e6a2\") " pod="calico-system/calico-apiserver-5fcfc6547b-67zph"
Mar 12 01:36:23.025867 kubelet[2528]: I0312 01:36:23.025169 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxwpv\" (UniqueName: \"kubernetes.io/projected/f00bf4e6-320f-408f-93a5-5bdfb046e6a2-kube-api-access-nxwpv\") pod \"calico-apiserver-5fcfc6547b-67zph\" (UID: \"f00bf4e6-320f-408f-93a5-5bdfb046e6a2\") " pod="calico-system/calico-apiserver-5fcfc6547b-67zph"
Mar 12 01:36:23.025867 kubelet[2528]: I0312 01:36:23.025227 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cc45d57a-2d65-4471-a944-16cc99da2325-calico-apiserver-certs\") pod \"calico-apiserver-5fcfc6547b-f9sbm\" (UID: \"cc45d57a-2d65-4471-a944-16cc99da2325\") " pod="calico-system/calico-apiserver-5fcfc6547b-f9sbm"
Mar 12 01:36:23.026096 kubelet[2528]: I0312 01:36:23.025249 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fe93df9-5176-42c7-b8e3-6176eea7ca40-config\") pod \"goldmane-cccfbd5cf-jqpbf\" (UID: \"8fe93df9-5176-42c7-b8e3-6176eea7ca40\") " pod="calico-system/goldmane-cccfbd5cf-jqpbf"
Mar 12 01:36:23.026096 kubelet[2528]: I0312 01:36:23.025323 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f72f5\" (UniqueName: \"kubernetes.io/projected/87896f19-89d0-488e-a664-14de866626f3-kube-api-access-f72f5\") pod \"coredns-66bc5c9577-977fw\" (UID: \"87896f19-89d0-488e-a664-14de866626f3\") " pod="kube-system/coredns-66bc5c9577-977fw"
Mar 12 01:36:23.026096 kubelet[2528]: I0312 01:36:23.025352 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnmvz\" (UniqueName: \"kubernetes.io/projected/cc45d57a-2d65-4471-a944-16cc99da2325-kube-api-access-rnmvz\") pod \"calico-apiserver-5fcfc6547b-f9sbm\" (UID: \"cc45d57a-2d65-4471-a944-16cc99da2325\") " pod="calico-system/calico-apiserver-5fcfc6547b-f9sbm"
Mar 12 01:36:23.026096 kubelet[2528]: I0312 01:36:23.025388 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8fe93df9-5176-42c7-b8e3-6176eea7ca40-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-jqpbf\" (UID: \"8fe93df9-5176-42c7-b8e3-6176eea7ca40\") " pod="calico-system/goldmane-cccfbd5cf-jqpbf"
Mar 12 01:36:23.026096 kubelet[2528]: I0312 01:36:23.025408 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tnm5\" (UniqueName: \"kubernetes.io/projected/8fe93df9-5176-42c7-b8e3-6176eea7ca40-kube-api-access-7tnm5\") pod \"goldmane-cccfbd5cf-jqpbf\" (UID: \"8fe93df9-5176-42c7-b8e3-6176eea7ca40\") " pod="calico-system/goldmane-cccfbd5cf-jqpbf"
Mar 12 01:36:23.026360 kubelet[2528]: I0312 01:36:23.025431 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5999426b-6640-4a93-bb5d-5d94700e760d-nginx-config\") pod \"whisker-58846f54f5-7rsmh\" (UID: \"5999426b-6640-4a93-bb5d-5d94700e760d\") " pod="calico-system/whisker-58846f54f5-7rsmh"
Mar 12 01:36:23.026360 kubelet[2528]: I0312 01:36:23.025483 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5999426b-6640-4a93-bb5d-5d94700e760d-whisker-backend-key-pair\") pod \"whisker-58846f54f5-7rsmh\" (UID: \"5999426b-6640-4a93-bb5d-5d94700e760d\") " pod="calico-system/whisker-58846f54f5-7rsmh"
Mar 12 01:36:23.048743 systemd[1]: Created slice kubepods-besteffort-podcc45d57a_2d65_4471_a944_16cc99da2325.slice - libcontainer container kubepods-besteffort-podcc45d57a_2d65_4471_a944_16cc99da2325.slice.
Mar 12 01:36:23.069809 systemd[1]: Created slice kubepods-besteffort-podf00bf4e6_320f_408f_93a5_5bdfb046e6a2.slice - libcontainer container kubepods-besteffort-podf00bf4e6_320f_408f_93a5_5bdfb046e6a2.slice.
Mar 12 01:36:23.086801 systemd[1]: Created slice kubepods-burstable-pod503f8cf5_b92a_411c_8353_481b71d6c97f.slice - libcontainer container kubepods-burstable-pod503f8cf5_b92a_411c_8353_481b71d6c97f.slice.
Mar 12 01:36:23.102854 systemd[1]: Created slice kubepods-besteffort-pod8fe93df9_5176_42c7_b8e3_6176eea7ca40.slice - libcontainer container kubepods-besteffort-pod8fe93df9_5176_42c7_b8e3_6176eea7ca40.slice.
Mar 12 01:36:23.111415 systemd[1]: Created slice kubepods-besteffort-pod5999426b_6640_4a93_bb5d_5d94700e760d.slice - libcontainer container kubepods-besteffort-pod5999426b_6640_4a93_bb5d_5d94700e760d.slice.
Mar 12 01:36:23.305703 kubelet[2528]: E0312 01:36:23.305245 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:36:23.308702 containerd[1454]: time="2026-03-12T01:36:23.308536158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-977fw,Uid:87896f19-89d0-488e-a664-14de866626f3,Namespace:kube-system,Attempt:0,}"
Mar 12 01:36:23.320120 containerd[1454]: time="2026-03-12T01:36:23.319979940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf4cfd8c5-kpffb,Uid:dd73580a-b13b-41aa-8b3e-da326a7dc9c7,Namespace:calico-system,Attempt:0,}"
Mar 12 01:36:23.385531 containerd[1454]: time="2026-03-12T01:36:23.384733487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcfc6547b-f9sbm,Uid:cc45d57a-2d65-4471-a944-16cc99da2325,Namespace:calico-system,Attempt:0,}"
Mar 12 01:36:23.400049 containerd[1454]: time="2026-03-12T01:36:23.399965426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcfc6547b-67zph,Uid:f00bf4e6-320f-408f-93a5-5bdfb046e6a2,Namespace:calico-system,Attempt:0,}"
Mar 12 01:36:23.405487 kubelet[2528]: E0312 01:36:23.405384 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:36:23.406585 containerd[1454]: time="2026-03-12T01:36:23.406507766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bc54q,Uid:503f8cf5-b92a-411c-8353-481b71d6c97f,Namespace:kube-system,Attempt:0,}"
Mar 12 01:36:23.418409 containerd[1454]: time="2026-03-12T01:36:23.418175917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-jqpbf,Uid:8fe93df9-5176-42c7-b8e3-6176eea7ca40,Namespace:calico-system,Attempt:0,}"
Mar 12 01:36:23.428837 containerd[1454]: time="2026-03-12T01:36:23.428583519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58846f54f5-7rsmh,Uid:5999426b-6640-4a93-bb5d-5d94700e760d,Namespace:calico-system,Attempt:0,}"
Mar 12 01:36:23.681354 containerd[1454]: time="2026-03-12T01:36:23.680210093Z" level=error msg="Failed to destroy network for sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.681983 containerd[1454]: time="2026-03-12T01:36:23.681563260Z" level=error msg="encountered an error cleaning up failed sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.681983 containerd[1454]: time="2026-03-12T01:36:23.681884499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-977fw,Uid:87896f19-89d0-488e-a664-14de866626f3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.683650 containerd[1454]: time="2026-03-12T01:36:23.683585133Z" level=error msg="Failed to destroy network for sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.685864 containerd[1454]: time="2026-03-12T01:36:23.685817060Z" level=error msg="encountered an error cleaning up failed sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.686387 containerd[1454]: time="2026-03-12T01:36:23.686347901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf4cfd8c5-kpffb,Uid:dd73580a-b13b-41aa-8b3e-da326a7dc9c7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.710791 kubelet[2528]: E0312 01:36:23.709405 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.710791 kubelet[2528]: E0312 01:36:23.709497 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cf4cfd8c5-kpffb"
Mar 12 01:36:23.710791 kubelet[2528]: E0312 01:36:23.709525 2528 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cf4cfd8c5-kpffb"
Mar 12 01:36:23.711142 kubelet[2528]: E0312 01:36:23.709594 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cf4cfd8c5-kpffb_calico-system(dd73580a-b13b-41aa-8b3e-da326a7dc9c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cf4cfd8c5-kpffb_calico-system(dd73580a-b13b-41aa-8b3e-da326a7dc9c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cf4cfd8c5-kpffb" podUID="dd73580a-b13b-41aa-8b3e-da326a7dc9c7"
Mar 12 01:36:23.711142 kubelet[2528]: E0312 01:36:23.710153 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.711142 kubelet[2528]: E0312 01:36:23.710227 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-977fw"
Mar 12 01:36:23.711411 kubelet[2528]: E0312 01:36:23.710316 2528 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-977fw"
Mar 12 01:36:23.711411 kubelet[2528]: E0312 01:36:23.710631 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-977fw_kube-system(87896f19-89d0-488e-a664-14de866626f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-977fw_kube-system(87896f19-89d0-488e-a664-14de866626f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-977fw" podUID="87896f19-89d0-488e-a664-14de866626f3"
Mar 12 01:36:23.722538 containerd[1454]: time="2026-03-12T01:36:23.722482716Z" level=error msg="Failed to destroy network for sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.723495 containerd[1454]: time="2026-03-12T01:36:23.723245770Z" level=error msg="encountered an error cleaning up failed sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.724829 containerd[1454]: time="2026-03-12T01:36:23.724731852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcfc6547b-f9sbm,Uid:cc45d57a-2d65-4471-a944-16cc99da2325,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.725842 kubelet[2528]: E0312 01:36:23.725396 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.725842 kubelet[2528]: E0312 01:36:23.725459 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fcfc6547b-f9sbm"
Mar 12 01:36:23.725842 kubelet[2528]: E0312 01:36:23.725486 2528 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fcfc6547b-f9sbm"
Mar 12 01:36:23.726020 kubelet[2528]: E0312 01:36:23.725543 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fcfc6547b-f9sbm_calico-system(cc45d57a-2d65-4471-a944-16cc99da2325)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fcfc6547b-f9sbm_calico-system(cc45d57a-2d65-4471-a944-16cc99da2325)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5fcfc6547b-f9sbm" podUID="cc45d57a-2d65-4471-a944-16cc99da2325"
Mar 12 01:36:23.744470 kubelet[2528]: I0312 01:36:23.744376 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6"
Mar 12 01:36:23.751859 containerd[1454]: time="2026-03-12T01:36:23.751804476Z" level=error msg="Failed to destroy network for sandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.757225 containerd[1454]: time="2026-03-12T01:36:23.757013701Z" level=error msg="encountered an error cleaning up failed sandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.757225 containerd[1454]: time="2026-03-12T01:36:23.757145928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58846f54f5-7rsmh,Uid:5999426b-6640-4a93-bb5d-5d94700e760d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.767215 containerd[1454]: time="2026-03-12T01:36:23.767188987Z" level=info msg="StopPodSandbox for \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\""
Mar 12 01:36:23.767533 kubelet[2528]: E0312 01:36:23.767463 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.767626 kubelet[2528]: E0312 01:36:23.767577 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58846f54f5-7rsmh"
Mar 12 01:36:23.767626 kubelet[2528]: E0312 01:36:23.767598 2528 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58846f54f5-7rsmh"
Mar 12 01:36:23.769117 containerd[1454]: time="2026-03-12T01:36:23.769051604Z" level=info msg="Ensure that sandbox de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6 in task-service has been cleanup successfully"
Mar 12 01:36:23.771108 kubelet[2528]: E0312 01:36:23.770442 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58846f54f5-7rsmh_calico-system(5999426b-6640-4a93-bb5d-5d94700e760d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58846f54f5-7rsmh_calico-system(5999426b-6640-4a93-bb5d-5d94700e760d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58846f54f5-7rsmh" podUID="5999426b-6640-4a93-bb5d-5d94700e760d"
Mar 12 01:36:23.778472 kubelet[2528]: I0312 01:36:23.778435 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765"
Mar 12 01:36:23.780131 containerd[1454]: time="2026-03-12T01:36:23.779795947Z" level=info msg="StopPodSandbox for \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\""
Mar 12 01:36:23.780203 containerd[1454]: time="2026-03-12T01:36:23.780127726Z" level=info msg="Ensure that sandbox 3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765 in task-service has been cleanup successfully"
Mar 12 01:36:23.799846 kubelet[2528]: I0312 01:36:23.799556 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6"
Mar 12 01:36:23.803514 containerd[1454]: time="2026-03-12T01:36:23.803433305Z" level=info msg="StopPodSandbox for \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\""
Mar 12 01:36:23.803942 containerd[1454]: time="2026-03-12T01:36:23.803829685Z" level=info msg="Ensure that sandbox 8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6 in task-service has been cleanup successfully"
Mar 12 01:36:23.845876 containerd[1454]: time="2026-03-12T01:36:23.845589130Z" level=info msg="CreateContainer within sandbox \"bb39506a172802f7e672a0807d3ad0ae509666d0d9c3b14a02d08be97b52862b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 12 01:36:23.875032 containerd[1454]: time="2026-03-12T01:36:23.871547933Z" level=error msg="Failed to destroy network for sandbox \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.879204 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528-shm.mount: Deactivated successfully.
Mar 12 01:36:23.883157 containerd[1454]: time="2026-03-12T01:36:23.882833618Z" level=error msg="encountered an error cleaning up failed sandbox \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.883157 containerd[1454]: time="2026-03-12T01:36:23.882918728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcfc6547b-67zph,Uid:f00bf4e6-320f-408f-93a5-5bdfb046e6a2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.883157 containerd[1454]: time="2026-03-12T01:36:23.883040068Z" level=error msg="Failed to destroy network for sandbox \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.886574 containerd[1454]: time="2026-03-12T01:36:23.883736048Z" level=error msg="encountered an error cleaning up failed sandbox \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.886574 containerd[1454]: time="2026-03-12T01:36:23.883774880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-jqpbf,Uid:8fe93df9-5176-42c7-b8e3-6176eea7ca40,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.886719 kubelet[2528]: E0312 01:36:23.883683 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.886719 kubelet[2528]: E0312 01:36:23.883752 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fcfc6547b-67zph"
Mar 12 01:36:23.886719 kubelet[2528]: E0312 01:36:23.883778 2528 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5fcfc6547b-67zph"
Mar 12 01:36:23.886878 kubelet[2528]: E0312 01:36:23.883848 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fcfc6547b-67zph_calico-system(f00bf4e6-320f-408f-93a5-5bdfb046e6a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fcfc6547b-67zph_calico-system(f00bf4e6-320f-408f-93a5-5bdfb046e6a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5fcfc6547b-67zph" podUID="f00bf4e6-320f-408f-93a5-5bdfb046e6a2"
Mar 12 01:36:23.886878 kubelet[2528]: E0312 01:36:23.883937 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.886878 kubelet[2528]: E0312 01:36:23.883967 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-jqpbf"
Mar 12 01:36:23.887150 kubelet[2528]: E0312 01:36:23.884049 2528 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-jqpbf"
Mar 12 01:36:23.887150 kubelet[2528]: E0312 01:36:23.885894 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-jqpbf_calico-system(8fe93df9-5176-42c7-b8e3-6176eea7ca40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-jqpbf_calico-system(8fe93df9-5176-42c7-b8e3-6176eea7ca40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-jqpbf" podUID="8fe93df9-5176-42c7-b8e3-6176eea7ca40"
Mar 12 01:36:23.891457 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62-shm.mount: Deactivated successfully.
Mar 12 01:36:23.917990 containerd[1454]: time="2026-03-12T01:36:23.914758027Z" level=error msg="Failed to destroy network for sandbox \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.917990 containerd[1454]: time="2026-03-12T01:36:23.916146440Z" level=error msg="encountered an error cleaning up failed sandbox \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.917990 containerd[1454]: time="2026-03-12T01:36:23.916208465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bc54q,Uid:503f8cf5-b92a-411c-8353-481b71d6c97f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.917223 systemd[1]: Created slice kubepods-besteffort-pode41b2407_7aa6_4ede_8904_d6670e550c53.slice - libcontainer container kubepods-besteffort-pode41b2407_7aa6_4ede_8904_d6670e550c53.slice.
Mar 12 01:36:23.922626 kubelet[2528]: E0312 01:36:23.921814 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.922626 kubelet[2528]: E0312 01:36:23.921885 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bc54q"
Mar 12 01:36:23.922626 kubelet[2528]: E0312 01:36:23.921913 2528 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bc54q"
Mar 12 01:36:23.922875 kubelet[2528]: E0312 01:36:23.921974 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-bc54q_kube-system(503f8cf5-b92a-411c-8353-481b71d6c97f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-bc54q_kube-system(503f8cf5-b92a-411c-8353-481b71d6c97f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-bc54q" podUID="503f8cf5-b92a-411c-8353-481b71d6c97f"
Mar 12 01:36:23.922997 containerd[1454]: time="2026-03-12T01:36:23.922684576Z" level=error msg="StopPodSandbox for \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\" failed" error="failed to destroy network for sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.923055 kubelet[2528]: E0312 01:36:23.923014 2528 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765"
Mar 12 01:36:23.923222 kubelet[2528]: E0312 01:36:23.923120 2528 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765"}
Mar 12 01:36:23.923330 kubelet[2528]: E0312 01:36:23.923243 2528 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc45d57a-2d65-4471-a944-16cc99da2325\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Mar 12 01:36:23.923896 kubelet[2528]: E0312 01:36:23.923348 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc45d57a-2d65-4471-a944-16cc99da2325\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5fcfc6547b-f9sbm" podUID="cc45d57a-2d65-4471-a944-16cc99da2325"
Mar 12 01:36:23.929762 containerd[1454]: time="2026-03-12T01:36:23.929362978Z" level=info msg="CreateContainer within sandbox \"bb39506a172802f7e672a0807d3ad0ae509666d0d9c3b14a02d08be97b52862b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"352a0cf8a9d33dd82452304bf850b56c4e10f2e16f0e11d24acc39bbae19a9b0\""
Mar 12 01:36:23.931950 containerd[1454]: time="2026-03-12T01:36:23.931671721Z" level=info msg="StartContainer for \"352a0cf8a9d33dd82452304bf850b56c4e10f2e16f0e11d24acc39bbae19a9b0\""
Mar 12 01:36:23.939131 containerd[1454]: time="2026-03-12T01:36:23.938993943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fz78r,Uid:e41b2407-7aa6-4ede-8904-d6670e550c53,Namespace:calico-system,Attempt:0,}"
Mar 12 01:36:23.949260 containerd[1454]: time="2026-03-12T01:36:23.948964639Z" level=error msg="StopPodSandbox for \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\" failed" error="failed to destroy network for sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.949972 kubelet[2528]: E0312 01:36:23.949828 2528 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6"
Mar 12 01:36:23.949972 kubelet[2528]: E0312 01:36:23.949897 2528 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6"}
Mar 12 01:36:23.949972 kubelet[2528]: E0312 01:36:23.949932 2528 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd73580a-b13b-41aa-8b3e-da326a7dc9c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Mar 12 01:36:23.949972 kubelet[2528]: E0312 01:36:23.949956 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd73580a-b13b-41aa-8b3e-da326a7dc9c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cf4cfd8c5-kpffb" podUID="dd73580a-b13b-41aa-8b3e-da326a7dc9c7"
Mar 12 01:36:23.961026 containerd[1454]: time="2026-03-12T01:36:23.960967274Z" level=error msg="StopPodSandbox for \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\" failed" error="failed to destroy network for sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:36:23.964949 kubelet[2528]: E0312 01:36:23.964843 2528 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6"
Mar 12 01:36:23.965236 kubelet[2528]: E0312 01:36:23.965209 2528 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6"}
Mar 12 01:36:23.965919 kubelet[2528]: E0312 01:36:23.965858 2528 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87896f19-89d0-488e-a664-14de866626f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Mar 12 01:36:23.966176 kubelet[2528]: E0312 01:36:23.966145 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87896f19-89d0-488e-a664-14de866626f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-977fw" podUID="87896f19-89d0-488e-a664-14de866626f3"
Mar 12 01:36:23.999596 systemd[1]: Started cri-containerd-352a0cf8a9d33dd82452304bf850b56c4e10f2e16f0e11d24acc39bbae19a9b0.scope - libcontainer container 352a0cf8a9d33dd82452304bf850b56c4e10f2e16f0e11d24acc39bbae19a9b0.
Mar 12 01:36:24.073324 containerd[1454]: time="2026-03-12T01:36:24.071143664Z" level=info msg="StartContainer for \"352a0cf8a9d33dd82452304bf850b56c4e10f2e16f0e11d24acc39bbae19a9b0\" returns successfully" Mar 12 01:36:24.107008 containerd[1454]: time="2026-03-12T01:36:24.106916047Z" level=error msg="Failed to destroy network for sandbox \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:36:24.114446 containerd[1454]: time="2026-03-12T01:36:24.110231421Z" level=error msg="encountered an error cleaning up failed sandbox \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:36:24.114446 containerd[1454]: time="2026-03-12T01:36:24.111243220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fz78r,Uid:e41b2407-7aa6-4ede-8904-d6670e550c53,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:36:24.126632 kubelet[2528]: E0312 01:36:24.124690 2528 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:36:24.134817 kubelet[2528]: E0312 01:36:24.125808 2528 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fz78r" Mar 12 01:36:24.134817 kubelet[2528]: E0312 01:36:24.131543 2528 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fz78r" Mar 12 01:36:24.145530 kubelet[2528]: E0312 01:36:24.132413 2528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fz78r_calico-system(e41b2407-7aa6-4ede-8904-d6670e550c53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fz78r_calico-system(e41b2407-7aa6-4ede-8904-d6670e550c53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fz78r" podUID="e41b2407-7aa6-4ede-8904-d6670e550c53" Mar 12 01:36:24.787758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640-shm.mount: Deactivated successfully. Mar 12 01:36:24.806848 kubelet[2528]: I0312 01:36:24.806798 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:24.808441 kubelet[2528]: I0312 01:36:24.808414 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:24.809717 kubelet[2528]: I0312 01:36:24.809693 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:24.810943 containerd[1454]: time="2026-03-12T01:36:24.810897423Z" level=info msg="StopPodSandbox for \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\"" Mar 12 01:36:24.811552 containerd[1454]: time="2026-03-12T01:36:24.811190641Z" level=info msg="Ensure that sandbox 5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d in task-service has been cleanup successfully" Mar 12 01:36:24.812318 containerd[1454]: time="2026-03-12T01:36:24.811692357Z" level=info msg="StopPodSandbox for \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\"" Mar 12 01:36:24.812318 containerd[1454]: time="2026-03-12T01:36:24.811914252Z" level=info msg="Ensure that sandbox fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62 in task-service has been cleanup successfully" Mar 12 01:36:24.812817 containerd[1454]: time="2026-03-12T01:36:24.812747816Z" level=info msg="StopPodSandbox for \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\"" Mar 12 01:36:24.813259 containerd[1454]: time="2026-03-12T01:36:24.813234965Z" level=info msg="Ensure that sandbox b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640 in task-service has been cleanup successfully" Mar 12 01:36:24.850325 kubelet[2528]: I0312 01:36:24.850183 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:24.852858 containerd[1454]: time="2026-03-12T01:36:24.851750847Z" level=info msg="StopPodSandbox for \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\"" Mar 12 01:36:24.855600 containerd[1454]: time="2026-03-12T01:36:24.855219662Z" level=info msg="Ensure that sandbox 0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8 in task-service has been cleanup successfully" Mar 12 01:36:24.867330 kubelet[2528]: I0312 01:36:24.865389 2528 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:24.867505 containerd[1454]: time="2026-03-12T01:36:24.866772789Z" level=info msg="StopPodSandbox for \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\"" Mar 12 01:36:24.867505 containerd[1454]: time="2026-03-12T01:36:24.867132841Z" level=info msg="Ensure that sandbox f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528 in task-service has been cleanup 
successfully" Mar 12 01:36:24.952123 systemd[1]: run-containerd-runc-k8s.io-352a0cf8a9d33dd82452304bf850b56c4e10f2e16f0e11d24acc39bbae19a9b0-runc.BX9MRF.mount: Deactivated successfully. Mar 12 01:36:25.108179 kubelet[2528]: I0312 01:36:25.107880 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g88tn" podStartSLOduration=5.119544823 podStartE2EDuration="24.107856486s" podCreationTimestamp="2026-03-12 01:36:01 +0000 UTC" firstStartedPulling="2026-03-12 01:36:02.069903478 +0000 UTC m=+20.389749249" lastFinishedPulling="2026-03-12 01:36:21.058215141 +0000 UTC m=+39.378060912" observedRunningTime="2026-03-12 01:36:24.879400155 +0000 UTC m=+43.199245955" watchObservedRunningTime="2026-03-12 01:36:25.107856486 +0000 UTC m=+43.427702257" Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.125 [INFO][3765] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.129 [INFO][3765] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" iface="eth0" netns="/var/run/netns/cni-c133a569-555b-5159-bbaf-13ba91b58256" Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.132 [INFO][3765] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" iface="eth0" netns="/var/run/netns/cni-c133a569-555b-5159-bbaf-13ba91b58256" Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.132 [INFO][3765] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" iface="eth0" netns="/var/run/netns/cni-c133a569-555b-5159-bbaf-13ba91b58256" Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.132 [INFO][3765] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.132 [INFO][3765] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.238 [INFO][3862] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.238 [INFO][3862] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.238 [INFO][3862] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.267 [WARNING][3862] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.267 [INFO][3862] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.278 [INFO][3862] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:25.294525 containerd[1454]: 2026-03-12 01:36:25.290 [INFO][3765] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:25.298646 containerd[1454]: time="2026-03-12T01:36:25.298553478Z" level=info msg="TearDown network for sandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\" successfully" Mar 12 01:36:25.298802 containerd[1454]: time="2026-03-12T01:36:25.298777987Z" level=info msg="StopPodSandbox for \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\" returns successfully" Mar 12 01:36:25.308370 systemd[1]: run-netns-cni\x2dc133a569\x2d555b\x2d5159\x2dbbaf\x2d13ba91b58256.mount: Deactivated successfully. Mar 12 01:36:25.321415 systemd[1]: run-netns-cni\x2d843ef65a\x2daff3\x2d7284\x2de532\x2d11610f6a7afc.mount: Deactivated successfully. Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.134 [INFO][3804] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.134 [INFO][3804] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" iface="eth0" netns="/var/run/netns/cni-843ef65a-aff3-7284-e532-11610f6a7afc" Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.135 [INFO][3804] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" iface="eth0" netns="/var/run/netns/cni-843ef65a-aff3-7284-e532-11610f6a7afc" Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.136 [INFO][3804] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" iface="eth0" netns="/var/run/netns/cni-843ef65a-aff3-7284-e532-11610f6a7afc" Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.136 [INFO][3804] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.136 [INFO][3804] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.250 [INFO][3864] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" HandleID="k8s-pod-network.f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.251 [INFO][3864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.278 [INFO][3864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.293 [WARNING][3864] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" HandleID="k8s-pod-network.f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.293 [INFO][3864] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" HandleID="k8s-pod-network.f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.299 [INFO][3864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:25.322430 containerd[1454]: 2026-03-12 01:36:25.303 [INFO][3804] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:25.322430 containerd[1454]: time="2026-03-12T01:36:25.321822539Z" level=info msg="TearDown network for sandbox \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\" successfully" Mar 12 01:36:25.322430 containerd[1454]: time="2026-03-12T01:36:25.321894263Z" level=info msg="StopPodSandbox for \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\" returns successfully" Mar 12 01:36:25.333795 containerd[1454]: time="2026-03-12T01:36:25.333188419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58846f54f5-7rsmh,Uid:5999426b-6640-4a93-bb5d-5d94700e760d,Namespace:calico-system,Attempt:1,}" Mar 12 01:36:25.349327 containerd[1454]: time="2026-03-12T01:36:25.349170525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcfc6547b-67zph,Uid:f00bf4e6-320f-408f-93a5-5bdfb046e6a2,Namespace:calico-system,Attempt:1,}" Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.170 [INFO][3770] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.172 [INFO][3770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" iface="eth0" netns="/var/run/netns/cni-5bea74f4-7102-29cd-7077-0efd251c4173" Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.173 [INFO][3770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" iface="eth0" netns="/var/run/netns/cni-5bea74f4-7102-29cd-7077-0efd251c4173" Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.174 [INFO][3770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" iface="eth0" netns="/var/run/netns/cni-5bea74f4-7102-29cd-7077-0efd251c4173" Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.175 [INFO][3770] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.175 [INFO][3770] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.269 [INFO][3877] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" HandleID="k8s-pod-network.b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.271 [INFO][3877] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.298 [INFO][3877] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.332 [WARNING][3877] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" HandleID="k8s-pod-network.b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.334 [INFO][3877] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" HandleID="k8s-pod-network.b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.347 [INFO][3877] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:25.370740 containerd[1454]: 2026-03-12 01:36:25.354 [INFO][3770] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:25.372006 containerd[1454]: time="2026-03-12T01:36:25.371738048Z" level=info msg="TearDown network for sandbox \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\" successfully" Mar 12 01:36:25.372006 containerd[1454]: time="2026-03-12T01:36:25.371793942Z" level=info msg="StopPodSandbox for \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\" returns successfully" Mar 12 01:36:25.380803 kubelet[2528]: E0312 01:36:25.380720 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:25.385468 containerd[1454]: time="2026-03-12T01:36:25.385320540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bc54q,Uid:503f8cf5-b92a-411c-8353-481b71d6c97f,Namespace:kube-system,Attempt:1,}" Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.109 [INFO][3760] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.109 [INFO][3760] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" iface="eth0" netns="/var/run/netns/cni-da5223ce-09f4-a1c7-cd6b-e0e449efa642" Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.111 [INFO][3760] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" iface="eth0" netns="/var/run/netns/cni-da5223ce-09f4-a1c7-cd6b-e0e449efa642" Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.111 [INFO][3760] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" iface="eth0" netns="/var/run/netns/cni-da5223ce-09f4-a1c7-cd6b-e0e449efa642" Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.111 [INFO][3760] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.111 [INFO][3760] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.274 [INFO][3855] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" HandleID="k8s-pod-network.fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.274 [INFO][3855] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.348 [INFO][3855] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.368 [WARNING][3855] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" HandleID="k8s-pod-network.fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.368 [INFO][3855] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" HandleID="k8s-pod-network.fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.375 [INFO][3855] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:25.393513 containerd[1454]: 2026-03-12 01:36:25.384 [INFO][3760] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:25.394191 containerd[1454]: time="2026-03-12T01:36:25.394069205Z" level=info msg="TearDown network for sandbox \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\" successfully" Mar 12 01:36:25.394191 containerd[1454]: time="2026-03-12T01:36:25.394135329Z" level=info msg="StopPodSandbox for \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\" returns successfully" Mar 12 01:36:25.408533 containerd[1454]: time="2026-03-12T01:36:25.405751553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-jqpbf,Uid:8fe93df9-5176-42c7-b8e3-6176eea7ca40,Namespace:calico-system,Attempt:1,}" Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.118 [INFO][3824] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.119 [INFO][3824] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" iface="eth0" netns="/var/run/netns/cni-55577927-35b4-f6eb-b110-814131262066" Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.124 [INFO][3824] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" iface="eth0" netns="/var/run/netns/cni-55577927-35b4-f6eb-b110-814131262066" Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.126 [INFO][3824] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" iface="eth0" netns="/var/run/netns/cni-55577927-35b4-f6eb-b110-814131262066" Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.127 [INFO][3824] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.128 [INFO][3824] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.274 [INFO][3861] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" HandleID="k8s-pod-network.0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.274 [INFO][3861] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.375 [INFO][3861] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.385 [WARNING][3861] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" HandleID="k8s-pod-network.0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.385 [INFO][3861] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" HandleID="k8s-pod-network.0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.399 [INFO][3861] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:25.409889 containerd[1454]: 2026-03-12 01:36:25.405 [INFO][3824] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:25.411953 containerd[1454]: time="2026-03-12T01:36:25.410207594Z" level=info msg="TearDown network for sandbox \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\" successfully" Mar 12 01:36:25.411953 containerd[1454]: time="2026-03-12T01:36:25.410234113Z" level=info msg="StopPodSandbox for \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\" returns successfully" Mar 12 01:36:25.416334 containerd[1454]: time="2026-03-12T01:36:25.416033527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fz78r,Uid:e41b2407-7aa6-4ede-8904-d6670e550c53,Namespace:calico-system,Attempt:1,}" Mar 12 01:36:25.800923 systemd[1]: run-netns-cni\x2d55577927\x2d35b4\x2df6eb\x2db110\x2d814131262066.mount: Deactivated successfully. Mar 12 01:36:25.801422 systemd[1]: run-netns-cni\x2d5bea74f4\x2d7102\x2d29cd\x2d7077\x2d0efd251c4173.mount: Deactivated successfully. Mar 12 01:36:25.801542 systemd[1]: run-netns-cni\x2dda5223ce\x2d09f4\x2da1c7\x2dcd6b\x2de0e449efa642.mount: Deactivated successfully. Mar 12 01:36:25.917318 systemd-networkd[1389]: calid3607af6102: Link UP Mar 12 01:36:25.919891 systemd-networkd[1389]: calid3607af6102: Gained carrier Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.518 [ERROR][3897] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.575 [INFO][3897] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--58846f54f5--7rsmh-eth0 whisker-58846f54f5- calico-system 5999426b-6640-4a93-bb5d-5d94700e760d 911 0 2026-03-12 01:36:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58846f54f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-58846f54f5-7rsmh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid3607af6102 [] [] }} ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Namespace="calico-system" Pod="whisker-58846f54f5-7rsmh" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.576 [INFO][3897] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Namespace="calico-system" Pod="whisker-58846f54f5-7rsmh" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.709 [INFO][3986] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.784 [INFO][3986] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000503b90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-58846f54f5-7rsmh", "timestamp":"2026-03-12 01:36:25.709943211 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000726000)} Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.785 [INFO][3986] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.785 [INFO][3986] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.785 [INFO][3986] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.800 [INFO][3986] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" host="localhost" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.818 [INFO][3986] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.829 [INFO][3986] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.837 [INFO][3986] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.843 [INFO][3986] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.844 [INFO][3986] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" host="localhost" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.853 [INFO][3986] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.874 [INFO][3986] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" host="localhost" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.885 [INFO][3986] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" host="localhost" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.885 [INFO][3986] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" host="localhost" Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.885 [INFO][3986] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
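[Editor's note] The IPAM trace above has a fixed shape: confirm the host's affinity to a /26 block, load the block, claim one free ordinal for a handle named k8s-pod-network.&lt;containerID&gt;, and write the block back, all under the host-wide lock. A toy Go model of just the claim step, using the same block from the log, is below. The CIDR 192.168.88.128/26 and the handle naming are from the log; the types are assumptions, and ordinal 0 is skipped here only because the log shows the first claimed address is .129, not .128, so treating it as reserved is a guess of this sketch.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block models a Calico-style /26 allocation block: 64 ordinals,
    // each either free or bound to a handle.
    type block struct {
        cidr       netip.Prefix
        allocation [64]string // "" = free; otherwise the owning handle
    }

    // nthAddr returns the n-th address inside the block's CIDR.
    func (b *block) nthAddr(n int) netip.Addr {
        a := b.cidr.Addr().As4()
        v := uint32(a[0])<<24 | uint32(a[1])<<16 | uint32(a[2])<<8 | uint32(a[3])
        v += uint32(n)
        return netip.AddrFrom4([4]byte{byte(v >> 24), byte(v >> 16), byte(v >> 8), byte(v)})
    }

    // assign claims the first free ordinal for handle, mirroring
    // "Attempting to assign 1 addresses from block" in the trace.
    func (b *block) assign(handle string) (netip.Addr, bool) {
        for n := 1; n < 64; n++ {
            if b.allocation[n] == "" {
                b.allocation[n] = handle
                return b.nthAddr(n), true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        blk := &block{cidr: netip.MustParsePrefix("192.168.88.128/26")}
        for _, h := range []string{"whisker", "apiserver", "goldmane"} {
            ip, _ := blk.assign("k8s-pod-network." + h)
            fmt.Printf("%-12s -> %s\n", h, ip) // .129, .130, .131, as in the log
        }
    }

Run in sequence, the three claims yield 192.168.88.129, .130, and .131, which is exactly the order the whisker, calico-apiserver, and goldmane pods receive addresses in the entries that follow.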
Mar 12 01:36:25.955847 containerd[1454]: 2026-03-12 01:36:25.885 [INFO][3986] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:25.958830 containerd[1454]: 2026-03-12 01:36:25.892 [INFO][3897] cni-plugin/k8s.go 418: Populated endpoint ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Namespace="calico-system" Pod="whisker-58846f54f5-7rsmh" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--58846f54f5--7rsmh-eth0", GenerateName:"whisker-58846f54f5-", Namespace:"calico-system", SelfLink:"", UID:"5999426b-6640-4a93-bb5d-5d94700e760d", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58846f54f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-58846f54f5-7rsmh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid3607af6102", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:25.958830 containerd[1454]: 2026-03-12 01:36:25.892 [INFO][3897] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Namespace="calico-system" Pod="whisker-58846f54f5-7rsmh" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:25.958830 containerd[1454]: 2026-03-12 01:36:25.892 [INFO][3897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3607af6102 ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Namespace="calico-system" Pod="whisker-58846f54f5-7rsmh" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:25.958830 containerd[1454]: 2026-03-12 01:36:25.923 [INFO][3897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Namespace="calico-system" Pod="whisker-58846f54f5-7rsmh" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:25.958830 containerd[1454]: 2026-03-12 01:36:25.924 [INFO][3897] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Namespace="calico-system" Pod="whisker-58846f54f5-7rsmh" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--58846f54f5--7rsmh-eth0", GenerateName:"whisker-58846f54f5-", Namespace:"calico-system", SelfLink:"", UID:"5999426b-6640-4a93-bb5d-5d94700e760d", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58846f54f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f", Pod:"whisker-58846f54f5-7rsmh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid3607af6102", MAC:"b6:72:dd:69:dc:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:25.958830 containerd[1454]: 2026-03-12 01:36:25.951 [INFO][3897] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Namespace="calico-system" Pod="whisker-58846f54f5-7rsmh" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:26.024756 containerd[1454]: time="2026-03-12T01:36:26.023918167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:36:26.024756 containerd[1454]: time="2026-03-12T01:36:26.024025437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:36:26.024756 containerd[1454]: time="2026-03-12T01:36:26.024052629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:26.024756 containerd[1454]: time="2026-03-12T01:36:26.024256278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:26.046832 systemd-networkd[1389]: cali3ae3384e84d: Link UP Mar 12 01:36:26.048675 systemd-networkd[1389]: cali3ae3384e84d: Gained carrier Mar 12 01:36:26.100577 systemd[1]: Started cri-containerd-592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f.scope - libcontainer container 592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f. 
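[Editor's note] Host-side interface names like calid3607af6102 above are exactly 15 characters: a "cali" prefix plus an 11-character suffix, fitting the kernel's IFNAMSIZ-1 limit on interface names. One way such stable names can be derived from a workload endpoint key is sketched below; the hashing scheme is an assumption for illustration, and only the prefix, the observed name, and the length limit are taken from the log.

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName derives a stable host-side interface name from an
    // endpoint key. Linux caps interface names at 15 characters, so the
    // 4-byte "cali" prefix leaves room for an 11-character digest
    // suffix. Hashing the endpoint key is an assumed scheme here.
    func vethName(endpointKey string) string {
        sum := sha1.Sum([]byte(endpointKey))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        name := vethName("localhost-k8s-whisker--58846f54f5--7rsmh-eth0")
        fmt.Println(name, "len:", len(name)) // always 15 characters
    }

A deterministic name lets the teardown path find and delete the right host-side veth even after the netns is gone, which matches the "Setting the host side veth name" step in the ADD trace above.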
Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.524 [ERROR][3908] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.571 [INFO][3908] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0 calico-apiserver-5fcfc6547b- calico-system f00bf4e6-320f-408f-93a5-5bdfb046e6a2 913 0 2026-03-12 01:36:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fcfc6547b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fcfc6547b-67zph eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3ae3384e84d [] [] }} ContainerID="06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-67zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.571 [INFO][3908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-67zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.707 [INFO][3977] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" HandleID="k8s-pod-network.06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.789 [INFO][3977] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" HandleID="k8s-pod-network.06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003981b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5fcfc6547b-67zph", "timestamp":"2026-03-12 01:36:25.707765478 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000652420)} Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.789 [INFO][3977] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.887 [INFO][3977] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.887 [INFO][3977] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.898 [INFO][3977] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" host="localhost" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.922 [INFO][3977] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.961 [INFO][3977] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.972 [INFO][3977] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.978 [INFO][3977] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.980 [INFO][3977] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" host="localhost" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:25.984 [INFO][3977] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346 Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:26.006 [INFO][3977] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" host="localhost" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:26.019 [INFO][3977] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" host="localhost" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:26.020 [INFO][3977] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" host="localhost" Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:26.023 [INFO][3977] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
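[Editor's note] The interleaved timestamps above show the host-wide IPAM lock doing its job: the whisker allocation acquired it at 01:36:25.785 and released it at 25.885; the calico-apiserver request logged "About to acquire" at 25.789 but "Acquired" only at 25.887; goldmane (below) waits from 25.808 until 26.020. A minimal Go sketch of that serialization shape follows; the one-lock-per-host granularity is taken from the log wording, the rest (timing, pod names as labels) is illustrative.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // One mutex per host serializes the read-modify-write of the
    // allocation block, so concurrent CNI ADDs take turns.
    var hostWideIPAMLock sync.Mutex

    func allocate(pod string, next *int, wg *sync.WaitGroup) {
        defer wg.Done()
        fmt.Println(pod, "about to acquire host-wide IPAM lock")
        hostWideIPAMLock.Lock()
        fmt.Println(pod, "acquired host-wide IPAM lock")
        *next += 1
        ip := fmt.Sprintf("192.168.88.%d/26", 128+*next)
        time.Sleep(50 * time.Millisecond) // stand-in for the datastore write
        fmt.Println(pod, "claimed", ip, "- released host-wide IPAM lock")
        hostWideIPAMLock.Unlock()
    }

    func main() {
        var wg sync.WaitGroup
        next := 0
        for _, pod := range []string{"whisker", "calico-apiserver", "goldmane"} {
            wg.Add(1)
            go allocate(pod, &next, &wg)
        }
        wg.Wait()
    }

The roughly 100-600 ms gaps between "About to acquire" and "Acquired" in the log are the cost of this serialization when several sandboxes come up at once after calico-node starts.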
Mar 12 01:36:26.117800 containerd[1454]: 2026-03-12 01:36:26.024 [INFO][3977] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" HandleID="k8s-pod-network.06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:26.148223 containerd[1454]: 2026-03-12 01:36:26.042 [INFO][3908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-67zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0", GenerateName:"calico-apiserver-5fcfc6547b-", Namespace:"calico-system", SelfLink:"", UID:"f00bf4e6-320f-408f-93a5-5bdfb046e6a2", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcfc6547b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fcfc6547b-67zph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3ae3384e84d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:26.148223 containerd[1454]: 2026-03-12 01:36:26.042 [INFO][3908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-67zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:26.148223 containerd[1454]: 2026-03-12 01:36:26.042 [INFO][3908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ae3384e84d ContainerID="06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-67zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:26.148223 containerd[1454]: 2026-03-12 01:36:26.050 [INFO][3908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-67zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:26.148223 containerd[1454]: 2026-03-12 01:36:26.051 [INFO][3908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-67zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0", GenerateName:"calico-apiserver-5fcfc6547b-", Namespace:"calico-system", SelfLink:"", UID:"f00bf4e6-320f-408f-93a5-5bdfb046e6a2", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcfc6547b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346", Pod:"calico-apiserver-5fcfc6547b-67zph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3ae3384e84d", MAC:"f2:7b:95:22:88:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:26.148223 containerd[1454]: 2026-03-12 01:36:26.100 [INFO][3908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-67zph" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:26.346512 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:36:26.370252 containerd[1454]: time="2026-03-12T01:36:26.369534926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:36:26.370252 containerd[1454]: time="2026-03-12T01:36:26.369612911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:36:26.370252 containerd[1454]: time="2026-03-12T01:36:26.369653748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:26.370252 containerd[1454]: time="2026-03-12T01:36:26.369815610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:26.407149 systemd-networkd[1389]: calibae3d8e948a: Link UP Mar 12 01:36:26.407729 systemd[1]: Started cri-containerd-06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346.scope - libcontainer container 06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346. 
Mar 12 01:36:26.412247 systemd-networkd[1389]: calibae3d8e948a: Gained carrier Mar 12 01:36:26.431209 containerd[1454]: time="2026-03-12T01:36:26.431122041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58846f54f5-7rsmh,Uid:5999426b-6640-4a93-bb5d-5d94700e760d,Namespace:calico-system,Attempt:1,} returns sandbox id \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\"" Mar 12 01:36:26.434761 containerd[1454]: time="2026-03-12T01:36:26.434513847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 12 01:36:26.454172 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:25.594 [ERROR][3937] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:25.620 [INFO][3937] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0 goldmane-cccfbd5cf- calico-system 8fe93df9-5176-42c7-b8e3-6176eea7ca40 910 0 2026-03-12 01:36:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-jqpbf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibae3d8e948a [] [] }} ContainerID="22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" Namespace="calico-system" Pod="goldmane-cccfbd5cf-jqpbf" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--jqpbf-" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:25.620 [INFO][3937] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" Namespace="calico-system" Pod="goldmane-cccfbd5cf-jqpbf" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:25.773 [INFO][3998] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" HandleID="k8s-pod-network.22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:25.808 [INFO][3998] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" HandleID="k8s-pod-network.22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ea00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-jqpbf", "timestamp":"2026-03-12 01:36:25.773497184 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004d2dc0)} Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:25.808 [INFO][3998] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.020 [INFO][3998] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.020 [INFO][3998] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.036 [INFO][3998] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" host="localhost" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.049 [INFO][3998] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.102 [INFO][3998] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.200 [INFO][3998] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.340 [INFO][3998] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.344 [INFO][3998] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" host="localhost" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.353 [INFO][3998] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57 Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.363 [INFO][3998] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" host="localhost" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.383 [INFO][3998] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" host="localhost" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.383 [INFO][3998] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" host="localhost" Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.383 [INFO][3998] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
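[Editor's note] Each ADD trace above opens with an ERROR-level line about /var/lib/calico/mtu that is actually benign: the log itself says the file is optional ("skipping the error since RequireMTUFile is false") and the plugin proceeds without it. A Go sketch of that optional-file-with-fallback pattern follows; the path and the optional-unless-required behavior come from the log, while the default value and function shape are assumptions.

    package main

    import (
        "errors"
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // mtuFile is the optional file named in the log's ERROR line.
    const mtuFile = "/var/lib/calico/mtu"

    // interfaceMTU reads the MTU written by calico/node, falling back
    // to a default when the file is missing or unreadable. The missing
    // case is expected while calico/node is still starting, which is
    // why the logged ERROR is harmless here.
    func interfaceMTU(defaultMTU int) int {
        b, err := os.ReadFile(mtuFile)
        if errors.Is(err, os.ErrNotExist) {
            return defaultMTU
        }
        if err != nil {
            return defaultMTU
        }
        if v, err := strconv.Atoi(strings.TrimSpace(string(b))); err == nil {
            return v
        }
        return defaultMTU
    }

    func main() {
        fmt.Println("veth MTU:", interfaceMTU(1500))
    }

Contrast this with the nodename file earlier in the log, which is required: its absence fails the whole ADD or DEL, while a missing MTU file merely falls back.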
Mar 12 01:36:26.472000 containerd[1454]: 2026-03-12 01:36:26.383 [INFO][3998] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" HandleID="k8s-pod-network.22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:26.472732 containerd[1454]: 2026-03-12 01:36:26.388 [INFO][3937] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" Namespace="calico-system" Pod="goldmane-cccfbd5cf-jqpbf" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8fe93df9-5176-42c7-b8e3-6176eea7ca40", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-jqpbf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibae3d8e948a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:26.472732 containerd[1454]: 2026-03-12 01:36:26.388 [INFO][3937] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" Namespace="calico-system" Pod="goldmane-cccfbd5cf-jqpbf" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:26.472732 containerd[1454]: 2026-03-12 01:36:26.388 [INFO][3937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibae3d8e948a ContainerID="22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" Namespace="calico-system" Pod="goldmane-cccfbd5cf-jqpbf" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:26.472732 containerd[1454]: 2026-03-12 01:36:26.422 [INFO][3937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" Namespace="calico-system" Pod="goldmane-cccfbd5cf-jqpbf" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:26.472732 containerd[1454]: 2026-03-12 01:36:26.426 [INFO][3937] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" Namespace="calico-system" Pod="goldmane-cccfbd5cf-jqpbf" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8fe93df9-5176-42c7-b8e3-6176eea7ca40", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57", Pod:"goldmane-cccfbd5cf-jqpbf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibae3d8e948a", MAC:"d6:87:6c:9c:07:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:26.472732 containerd[1454]: 2026-03-12 01:36:26.455 [INFO][3937] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57" Namespace="calico-system" Pod="goldmane-cccfbd5cf-jqpbf" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:26.519770 systemd-networkd[1389]: cali9ccbd44d559: Link UP Mar 12 01:36:26.584401 systemd-networkd[1389]: cali9ccbd44d559: Gained carrier Mar 12 01:36:26.585962 containerd[1454]: time="2026-03-12T01:36:26.583747969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:36:26.585962 containerd[1454]: time="2026-03-12T01:36:26.585633539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:36:26.585962 containerd[1454]: time="2026-03-12T01:36:26.585764864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:26.586670 containerd[1454]: time="2026-03-12T01:36:26.586544800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:26.844317 containerd[1454]: time="2026-03-12T01:36:26.844103176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcfc6547b-67zph,Uid:f00bf4e6-320f-408f-93a5-5bdfb046e6a2,Namespace:calico-system,Attempt:1,} returns sandbox id \"06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346\"" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:25.590 [ERROR][3950] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:25.621 [INFO][3950] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fz78r-eth0 csi-node-driver- calico-system e41b2407-7aa6-4ede-8904-d6670e550c53 912 0 2026-03-12 01:36:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fz78r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9ccbd44d559 [] [] }} ContainerID="530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" Namespace="calico-system" Pod="csi-node-driver-fz78r" WorkloadEndpoint="localhost-k8s-csi--node--driver--fz78r-" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:25.621 [INFO][3950] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" Namespace="calico-system" Pod="csi-node-driver-fz78r" WorkloadEndpoint="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:25.789 [INFO][3996] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" HandleID="k8s-pod-network.530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:25.816 [INFO][3996] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" HandleID="k8s-pod-network.530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fz78r", "timestamp":"2026-03-12 01:36:25.789813632 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00011dce0)} Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:25.817 [INFO][3996] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.383 [INFO][3996] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
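[Annotation] The "RunPodSandbox ... returns sandbox id" entries are containerd answering kubelet's CRI RunPodSandbox call; the CNI ADD processing interleaved above happens inside that call. A rough sketch of issuing the same request directly over the CRI socket, under assumptions: the containerd socket path from this host, the k8s.io/cri-api v1 types, and metadata copied from the csi-node-driver entry:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "csi-node-driver-fz78r",
                    Uid:       "e41b2407-7aa6-4ede-8904-d6670e550c53",
                    Namespace: "calico-system",
                    Attempt:   1,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        // The returned id is what the log echoes, e.g. 530f97b39787a0...
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }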
Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.383 [INFO][3996] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.391 [INFO][3996] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" host="localhost" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.409 [INFO][3996] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.425 [INFO][3996] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.441 [INFO][3996] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.455 [INFO][3996] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.455 [INFO][3996] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" host="localhost" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.473 [INFO][3996] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0 Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.487 [INFO][3996] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" host="localhost" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.504 [INFO][3996] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" host="localhost" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.504 [INFO][3996] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" host="localhost" Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.504 [INFO][3996] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
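[Annotation] Note the timing on this second allocation ([3996], csi-node-driver): it logged "About to acquire host-wide IPAM lock" at 01:36:25.817 but only acquired it at 01:36:26.383, the instant the goldmane allocation ([3998]) released it. Concurrent CNI ADDs on one node serialize on that lock, which is why the four allocations in this window complete one after another. The wait, computed from the logged millisecond-precision timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.000"
        asked, _ := time.Parse(layout, "2026-03-12 01:36:25.817")
        got, _ := time.Parse(layout, "2026-03-12 01:36:26.383")
        fmt.Println("waited on IPAM lock:", got.Sub(asked)) // 566ms
    }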
Mar 12 01:36:26.844846 containerd[1454]: 2026-03-12 01:36:26.505 [INFO][3996] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" HandleID="k8s-pod-network.530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:26.846489 containerd[1454]: 2026-03-12 01:36:26.509 [INFO][3950] cni-plugin/k8s.go 418: Populated endpoint ContainerID="530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" Namespace="calico-system" Pod="csi-node-driver-fz78r" WorkloadEndpoint="localhost-k8s-csi--node--driver--fz78r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fz78r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e41b2407-7aa6-4ede-8904-d6670e550c53", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fz78r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ccbd44d559", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:26.846489 containerd[1454]: 2026-03-12 01:36:26.509 [INFO][3950] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" Namespace="calico-system" Pod="csi-node-driver-fz78r" WorkloadEndpoint="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:26.846489 containerd[1454]: 2026-03-12 01:36:26.509 [INFO][3950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ccbd44d559 ContainerID="530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" Namespace="calico-system" Pod="csi-node-driver-fz78r" WorkloadEndpoint="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:26.846489 containerd[1454]: 2026-03-12 01:36:26.613 [INFO][3950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" Namespace="calico-system" Pod="csi-node-driver-fz78r" WorkloadEndpoint="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:26.846489 containerd[1454]: 2026-03-12 01:36:26.750 [INFO][3950] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" Namespace="calico-system" Pod="csi-node-driver-fz78r" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fz78r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fz78r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e41b2407-7aa6-4ede-8904-d6670e550c53", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0", Pod:"csi-node-driver-fz78r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ccbd44d559", MAC:"ce:11:58:05:db:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:26.846489 containerd[1454]: 2026-03-12 01:36:26.820 [INFO][3950] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0" Namespace="calico-system" Pod="csi-node-driver-fz78r" WorkloadEndpoint="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:26.855547 systemd[1]: Started cri-containerd-22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57.scope - libcontainer container 22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57. Mar 12 01:36:26.899515 systemd[1]: run-containerd-runc-k8s.io-22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57-runc.OaOr18.mount: Deactivated successfully. Mar 12 01:36:26.968885 containerd[1454]: time="2026-03-12T01:36:26.968714511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:36:26.968885 containerd[1454]: time="2026-03-12T01:36:26.968789371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:36:26.968885 containerd[1454]: time="2026-03-12T01:36:26.968816692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:26.969668 containerd[1454]: time="2026-03-12T01:36:26.968928140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:27.005510 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:36:27.030561 systemd[1]: Started cri-containerd-530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0.scope - libcontainer container 530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0. 
Mar 12 01:36:27.093991 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:36:27.096064 systemd-networkd[1389]: cali1fdfb5df2e4: Link UP Mar 12 01:36:27.100436 systemd-networkd[1389]: cali1fdfb5df2e4: Gained carrier Mar 12 01:36:27.153902 containerd[1454]: time="2026-03-12T01:36:27.153461213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fz78r,Uid:e41b2407-7aa6-4ede-8904-d6670e550c53,Namespace:calico-system,Attempt:1,} returns sandbox id \"530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0\"" Mar 12 01:36:27.157878 containerd[1454]: time="2026-03-12T01:36:27.156086160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-jqpbf,Uid:8fe93df9-5176-42c7-b8e3-6176eea7ca40,Namespace:calico-system,Attempt:1,} returns sandbox id \"22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57\"" Mar 12 01:36:27.191403 systemd-networkd[1389]: calid3607af6102: Gained IPv6LL Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:25.623 [ERROR][3922] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:25.668 [INFO][3922] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--bc54q-eth0 coredns-66bc5c9577- kube-system 503f8cf5-b92a-411c-8353-481b71d6c97f 914 0 2026-03-12 01:35:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-bc54q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1fdfb5df2e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" Namespace="kube-system" Pod="coredns-66bc5c9577-bc54q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bc54q-" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:25.668 [INFO][3922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" Namespace="kube-system" Pod="coredns-66bc5c9577-bc54q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:25.840 [INFO][4015] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" HandleID="k8s-pod-network.5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:25.853 [INFO][4015] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" HandleID="k8s-pod-network.5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003665d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-bc54q", "timestamp":"2026-03-12 01:36:25.840796304 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000708420)} Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:25.853 [INFO][4015] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:26.504 [INFO][4015] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:26.505 [INFO][4015] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:26.517 [INFO][4015] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" host="localhost" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:26.810 [INFO][4015] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:26.890 [INFO][4015] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:26.917 [INFO][4015] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:26.940 [INFO][4015] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:26.941 [INFO][4015] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" host="localhost" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:26.948 [INFO][4015] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:26.985 [INFO][4015] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" host="localhost" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:27.053 [INFO][4015] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" host="localhost" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:27.054 [INFO][4015] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" host="localhost" Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:27.054 [INFO][4015] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:36:27.210445 containerd[1454]: 2026-03-12 01:36:27.054 [INFO][4015] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" HandleID="k8s-pod-network.5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:27.211511 containerd[1454]: 2026-03-12 01:36:27.087 [INFO][3922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" Namespace="kube-system" Pod="coredns-66bc5c9577-bc54q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bc54q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"503f8cf5-b92a-411c-8353-481b71d6c97f", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 35, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-bc54q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fdfb5df2e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:27.211511 containerd[1454]: 2026-03-12 01:36:27.087 [INFO][3922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" Namespace="kube-system" Pod="coredns-66bc5c9577-bc54q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:27.211511 containerd[1454]: 2026-03-12 01:36:27.088 [INFO][3922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1fdfb5df2e4 ContainerID="5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" Namespace="kube-system" Pod="coredns-66bc5c9577-bc54q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:27.211511 containerd[1454]: 2026-03-12 01:36:27.097 
[INFO][3922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" Namespace="kube-system" Pod="coredns-66bc5c9577-bc54q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:27.211511 containerd[1454]: 2026-03-12 01:36:27.101 [INFO][3922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" Namespace="kube-system" Pod="coredns-66bc5c9577-bc54q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bc54q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"503f8cf5-b92a-411c-8353-481b71d6c97f", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 35, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e", Pod:"coredns-66bc5c9577-bc54q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fdfb5df2e4", MAC:"86:9f:4d:ea:84:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:27.211511 containerd[1454]: 2026-03-12 01:36:27.175 [INFO][3922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e" Namespace="kube-system" Pod="coredns-66bc5c9577-bc54q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:27.274751 containerd[1454]: time="2026-03-12T01:36:27.274570260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:36:27.275410 containerd[1454]: time="2026-03-12T01:36:27.275350937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:36:27.275660 containerd[1454]: time="2026-03-12T01:36:27.275502540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:27.276508 containerd[1454]: time="2026-03-12T01:36:27.276202979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:27.319600 systemd-networkd[1389]: cali3ae3384e84d: Gained IPv6LL Mar 12 01:36:27.328695 systemd[1]: Started cri-containerd-5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e.scope - libcontainer container 5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e. Mar 12 01:36:27.352002 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:36:27.449677 containerd[1454]: time="2026-03-12T01:36:27.449348322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bc54q,Uid:503f8cf5-b92a-411c-8353-481b71d6c97f,Namespace:kube-system,Attempt:1,} returns sandbox id \"5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e\"" Mar 12 01:36:27.459976 kubelet[2528]: E0312 01:36:27.456690 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:27.490392 containerd[1454]: time="2026-03-12T01:36:27.490010055Z" level=info msg="CreateContainer within sandbox \"5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:36:27.550257 containerd[1454]: time="2026-03-12T01:36:27.550186032Z" level=info msg="CreateContainer within sandbox \"5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bdf44ee812f8a73d09992e9b9681a90d0c7f2de8e0097d66cb05cd2688c9b127\"" Mar 12 01:36:27.552091 containerd[1454]: time="2026-03-12T01:36:27.551834693Z" level=info msg="StartContainer for \"bdf44ee812f8a73d09992e9b9681a90d0c7f2de8e0097d66cb05cd2688c9b127\"" Mar 12 01:36:27.607530 systemd[1]: Started cri-containerd-bdf44ee812f8a73d09992e9b9681a90d0c7f2de8e0097d66cb05cd2688c9b127.scope - libcontainer container bdf44ee812f8a73d09992e9b9681a90d0c7f2de8e0097d66cb05cd2688c9b127. 
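[Annotation] The kubelet warning repeated through this window ("Nameserver limits were exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") reflects the classic resolv.conf cap of three nameservers: kubelet keeps the first three entries and drops the rest. A simplified sketch of that truncation (the fourth nameserver in the sample input is invented to trigger the condition; kubelet's real parsing is more involved):

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // resolv.conf limit kubelet enforces

    // applyNameserverLimit keeps at most maxNameservers entries and
    // reports whether any were omitted.
    func applyNameserverLimit(resolvConf string) ([]string, bool) {
        var servers []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            return servers[:maxNameservers], true
        }
        return servers, false
    }

    func main() {
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
        applied, truncated := applyNameserverLimit(conf)
        fmt.Println("applied:", strings.Join(applied, " "), "truncated:", truncated)
    }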
Mar 12 01:36:27.666465 kernel: calico-node[4207]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 12 01:36:27.715967 containerd[1454]: time="2026-03-12T01:36:27.715924658Z" level=info msg="StartContainer for \"bdf44ee812f8a73d09992e9b9681a90d0c7f2de8e0097d66cb05cd2688c9b127\" returns successfully" Mar 12 01:36:27.733833 containerd[1454]: time="2026-03-12T01:36:27.733741138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:27.737423 containerd[1454]: time="2026-03-12T01:36:27.737233958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 12 01:36:27.742535 containerd[1454]: time="2026-03-12T01:36:27.742209298Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:27.749579 containerd[1454]: time="2026-03-12T01:36:27.749414741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:27.751627 containerd[1454]: time="2026-03-12T01:36:27.751587987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.317032091s" Mar 12 01:36:27.751976 containerd[1454]: time="2026-03-12T01:36:27.751949773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 12 01:36:27.755547 containerd[1454]: time="2026-03-12T01:36:27.755402637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 01:36:27.770770 containerd[1454]: time="2026-03-12T01:36:27.770691324Z" level=info msg="CreateContainer within sandbox \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 01:36:27.810835 containerd[1454]: time="2026-03-12T01:36:27.809936916Z" level=info msg="CreateContainer within sandbox \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\"" Mar 12 01:36:27.811660 containerd[1454]: time="2026-03-12T01:36:27.811609920Z" level=info msg="StartContainer for \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\"" Mar 12 01:36:27.911664 systemd[1]: Started cri-containerd-c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766.scope - libcontainer container c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766. 
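[Annotation] The kernel nag about memfd_create() above means calico-node called it with neither MFD_EXEC nor MFD_NOEXEC_SEAL; since Linux 6.3 the kernel warns about such calls because the default executability of memfds is slated to change. Silencing it just means passing an explicit flag, e.g. (assuming a golang.org/x/sys release new enough to define MFD_NOEXEC_SEAL):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Explicitly seal the memfd as non-executable; omitting both
        // MFD_EXEC and MFD_NOEXEC_SEAL is what triggers the kernel warning.
        fd, err := unix.MemfdCreate("demo", unix.MFD_CLOEXEC|unix.MFD_NOEXEC_SEAL)
        if err != nil {
            fmt.Fprintln(os.Stderr, "memfd_create:", err)
            os.Exit(1)
        }
        defer unix.Close(fd)
        fmt.Println("memfd fd:", fd)
    }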
Mar 12 01:36:27.920804 kubelet[2528]: E0312 01:36:27.920706 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:28.070021 containerd[1454]: time="2026-03-12T01:36:28.069909563Z" level=info msg="StartContainer for \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\" returns successfully" Mar 12 01:36:28.151832 systemd-networkd[1389]: calibae3d8e948a: Gained IPv6LL Mar 12 01:36:28.153518 systemd-networkd[1389]: cali9ccbd44d559: Gained IPv6LL Mar 12 01:36:28.551169 systemd-networkd[1389]: vxlan.calico: Link UP Mar 12 01:36:28.551187 systemd-networkd[1389]: vxlan.calico: Gained carrier Mar 12 01:36:28.667520 systemd-networkd[1389]: cali1fdfb5df2e4: Gained IPv6LL Mar 12 01:36:28.936501 kubelet[2528]: E0312 01:36:28.936044 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:28.961477 kubelet[2528]: I0312 01:36:28.961347 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bc54q" podStartSLOduration=42.961324269 podStartE2EDuration="42.961324269s" podCreationTimestamp="2026-03-12 01:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:36:27.952611096 +0000 UTC m=+46.272456896" watchObservedRunningTime="2026-03-12 01:36:28.961324269 +0000 UTC m=+47.281170050" Mar 12 01:36:29.680606 containerd[1454]: time="2026-03-12T01:36:29.679406582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:29.680606 containerd[1454]: time="2026-03-12T01:36:29.680421678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 12 01:36:29.682541 containerd[1454]: time="2026-03-12T01:36:29.682471644Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:29.685601 containerd[1454]: time="2026-03-12T01:36:29.685537158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:29.687491 containerd[1454]: time="2026-03-12T01:36:29.686099959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.930652217s" Mar 12 01:36:29.687491 containerd[1454]: time="2026-03-12T01:36:29.686166162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 12 01:36:29.688520 containerd[1454]: time="2026-03-12T01:36:29.688365270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 12 01:36:29.694330 containerd[1454]: time="2026-03-12T01:36:29.694163415Z" level=info msg="CreateContainer 
within sandbox \"06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 01:36:29.725970 containerd[1454]: time="2026-03-12T01:36:29.725805713Z" level=info msg="CreateContainer within sandbox \"06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"21c62481816242204e222acae4f036e5003bc5ba7ce6fa69428edfced690572c\"" Mar 12 01:36:29.727064 containerd[1454]: time="2026-03-12T01:36:29.727034516Z" level=info msg="StartContainer for \"21c62481816242204e222acae4f036e5003bc5ba7ce6fa69428edfced690572c\"" Mar 12 01:36:29.775754 systemd[1]: Started cri-containerd-21c62481816242204e222acae4f036e5003bc5ba7ce6fa69428edfced690572c.scope - libcontainer container 21c62481816242204e222acae4f036e5003bc5ba7ce6fa69428edfced690572c. Mar 12 01:36:29.840199 containerd[1454]: time="2026-03-12T01:36:29.839704145Z" level=info msg="StartContainer for \"21c62481816242204e222acae4f036e5003bc5ba7ce6fa69428edfced690572c\" returns successfully" Mar 12 01:36:29.943412 kubelet[2528]: E0312 01:36:29.942071 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:30.276471 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Mar 12 01:36:30.948856 kubelet[2528]: I0312 01:36:30.948536 2528 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:36:31.047461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913288541.mount: Deactivated successfully. Mar 12 01:36:31.597692 containerd[1454]: time="2026-03-12T01:36:31.597603683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:31.598986 containerd[1454]: time="2026-03-12T01:36:31.598931952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 12 01:36:31.600727 containerd[1454]: time="2026-03-12T01:36:31.600661890Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:31.604426 containerd[1454]: time="2026-03-12T01:36:31.604379112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:31.605238 containerd[1454]: time="2026-03-12T01:36:31.605168468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.916761449s" Mar 12 01:36:31.605238 containerd[1454]: time="2026-03-12T01:36:31.605216257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 12 01:36:31.607027 containerd[1454]: time="2026-03-12T01:36:31.606798694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 12 01:36:31.611238 containerd[1454]: 
time="2026-03-12T01:36:31.611194631Z" level=info msg="CreateContainer within sandbox \"22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 12 01:36:31.633418 containerd[1454]: time="2026-03-12T01:36:31.633177203Z" level=info msg="CreateContainer within sandbox \"22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bbe7a7642c2a814c94acb640f80511aadac6b01dd81632282da1f88e4386b7d1\"" Mar 12 01:36:31.634236 containerd[1454]: time="2026-03-12T01:36:31.634193475Z" level=info msg="StartContainer for \"bbe7a7642c2a814c94acb640f80511aadac6b01dd81632282da1f88e4386b7d1\"" Mar 12 01:36:31.690529 systemd[1]: Started cri-containerd-bbe7a7642c2a814c94acb640f80511aadac6b01dd81632282da1f88e4386b7d1.scope - libcontainer container bbe7a7642c2a814c94acb640f80511aadac6b01dd81632282da1f88e4386b7d1. Mar 12 01:36:31.753320 containerd[1454]: time="2026-03-12T01:36:31.751720997Z" level=info msg="StartContainer for \"bbe7a7642c2a814c94acb640f80511aadac6b01dd81632282da1f88e4386b7d1\" returns successfully" Mar 12 01:36:31.974210 kubelet[2528]: I0312 01:36:31.972254 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5fcfc6547b-67zph" podStartSLOduration=29.133249574 podStartE2EDuration="31.972234114s" podCreationTimestamp="2026-03-12 01:36:00 +0000 UTC" firstStartedPulling="2026-03-12 01:36:26.848661395 +0000 UTC m=+45.168507166" lastFinishedPulling="2026-03-12 01:36:29.687645935 +0000 UTC m=+48.007491706" observedRunningTime="2026-03-12 01:36:29.961063295 +0000 UTC m=+48.280909146" watchObservedRunningTime="2026-03-12 01:36:31.972234114 +0000 UTC m=+50.292079915" Mar 12 01:36:31.974210 kubelet[2528]: I0312 01:36:31.972660 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-jqpbf" podStartSLOduration=27.550881413 podStartE2EDuration="31.972651643s" podCreationTimestamp="2026-03-12 01:36:00 +0000 UTC" firstStartedPulling="2026-03-12 01:36:27.184735795 +0000 UTC m=+45.504581576" lastFinishedPulling="2026-03-12 01:36:31.606506035 +0000 UTC m=+49.926351806" observedRunningTime="2026-03-12 01:36:31.971838625 +0000 UTC m=+50.291684466" watchObservedRunningTime="2026-03-12 01:36:31.972651643 +0000 UTC m=+50.292497414" Mar 12 01:36:32.048810 systemd[1]: run-containerd-runc-k8s.io-bbe7a7642c2a814c94acb640f80511aadac6b01dd81632282da1f88e4386b7d1-runc.A8kOot.mount: Deactivated successfully. 
Mar 12 01:36:32.209172 containerd[1454]: time="2026-03-12T01:36:32.209060837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:32.210368 containerd[1454]: time="2026-03-12T01:36:32.210308619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 12 01:36:32.212159 containerd[1454]: time="2026-03-12T01:36:32.212087547Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:32.216649 containerd[1454]: time="2026-03-12T01:36:32.216573471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 609.730745ms" Mar 12 01:36:32.216649 containerd[1454]: time="2026-03-12T01:36:32.216630247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 12 01:36:32.218875 containerd[1454]: time="2026-03-12T01:36:32.217997660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 12 01:36:32.222195 containerd[1454]: time="2026-03-12T01:36:32.221658525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:32.224721 containerd[1454]: time="2026-03-12T01:36:32.224552198Z" level=info msg="CreateContainer within sandbox \"530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 12 01:36:32.249938 containerd[1454]: time="2026-03-12T01:36:32.249842597Z" level=info msg="CreateContainer within sandbox \"530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5eb0e22328f906accd3c75f0e3c523bc5b60efe33b7bb3c6cea48878d90756ea\"" Mar 12 01:36:32.252688 containerd[1454]: time="2026-03-12T01:36:32.252589358Z" level=info msg="StartContainer for \"5eb0e22328f906accd3c75f0e3c523bc5b60efe33b7bb3c6cea48878d90756ea\"" Mar 12 01:36:32.306523 systemd[1]: Started cri-containerd-5eb0e22328f906accd3c75f0e3c523bc5b60efe33b7bb3c6cea48878d90756ea.scope - libcontainer container 5eb0e22328f906accd3c75f0e3c523bc5b60efe33b7bb3c6cea48878d90756ea. Mar 12 01:36:32.355865 containerd[1454]: time="2026-03-12T01:36:32.355669313Z" level=info msg="StartContainer for \"5eb0e22328f906accd3c75f0e3c523bc5b60efe33b7bb3c6cea48878d90756ea\" returns successfully" Mar 12 01:36:34.922983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905665009.mount: Deactivated successfully. 
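[Annotation] Each "Pulled image ... in <duration>" line pairs a PullImage request with its wall-clock cost: 609.73ms for csi above, versus 1.317s for whisker and 1.93s for apiserver earlier, roughly tracking image size. The equivalent pull through containerd's Go client, timed the same way (a sketch; the k8s.io namespace and socket path are assumptions matching a stock CRI setup):

    package main

    import (
        "context"
        "fmt"
        "time"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        start := time.Now()
        // WithPullUnpack fetches the image and unpacks it into a snapshot,
        // matching what the CRI plugin does before a container can start.
        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.31.4",
            containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Printf("pulled %s in %s\n", image.Name(), time.Since(start))
    }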
Mar 12 01:36:35.016856 containerd[1454]: time="2026-03-12T01:36:35.016548746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:35.018053 containerd[1454]: time="2026-03-12T01:36:35.017987822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 12 01:36:35.021725 containerd[1454]: time="2026-03-12T01:36:35.021644630Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:35.025487 containerd[1454]: time="2026-03-12T01:36:35.025430165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:35.026674 containerd[1454]: time="2026-03-12T01:36:35.026578390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.808515669s" Mar 12 01:36:35.026674 containerd[1454]: time="2026-03-12T01:36:35.026639815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 12 01:36:35.028454 containerd[1454]: time="2026-03-12T01:36:35.028397158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 12 01:36:35.032991 containerd[1454]: time="2026-03-12T01:36:35.032912969Z" level=info msg="CreateContainer within sandbox \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 01:36:35.055835 containerd[1454]: time="2026-03-12T01:36:35.055747630Z" level=info msg="CreateContainer within sandbox \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\"" Mar 12 01:36:35.056807 containerd[1454]: time="2026-03-12T01:36:35.056684554Z" level=info msg="StartContainer for \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\"" Mar 12 01:36:35.126518 systemd[1]: Started cri-containerd-ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd.scope - libcontainer container ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd. 
Mar 12 01:36:35.265957 containerd[1454]: time="2026-03-12T01:36:35.265738925Z" level=info msg="StartContainer for \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\" returns successfully" Mar 12 01:36:35.747238 containerd[1454]: time="2026-03-12T01:36:35.747102386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:35.748217 containerd[1454]: time="2026-03-12T01:36:35.748114679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 12 01:36:35.749829 containerd[1454]: time="2026-03-12T01:36:35.749764489Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:35.753732 containerd[1454]: time="2026-03-12T01:36:35.753652547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:35.754853 containerd[1454]: time="2026-03-12T01:36:35.754799259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 726.351536ms" Mar 12 01:36:35.754853 containerd[1454]: time="2026-03-12T01:36:35.754834935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 12 01:36:35.779112 containerd[1454]: time="2026-03-12T01:36:35.779023726Z" level=info msg="CreateContainer within sandbox \"530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 12 01:36:35.801367 containerd[1454]: time="2026-03-12T01:36:35.801236511Z" level=info msg="CreateContainer within sandbox \"530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b8935a860c44d065c0d97dbca2090467f15c1886c157fb38574b4ca4257d8a98\"" Mar 12 01:36:35.801979 containerd[1454]: time="2026-03-12T01:36:35.801908735Z" level=info msg="StartContainer for \"b8935a860c44d065c0d97dbca2090467f15c1886c157fb38574b4ca4257d8a98\"" Mar 12 01:36:35.864578 systemd[1]: Started cri-containerd-b8935a860c44d065c0d97dbca2090467f15c1886c157fb38574b4ca4257d8a98.scope - libcontainer container b8935a860c44d065c0d97dbca2090467f15c1886c157fb38574b4ca4257d8a98. 
Mar 12 01:36:35.897114 containerd[1454]: time="2026-03-12T01:36:35.896666997Z" level=info msg="StopPodSandbox for \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\"" Mar 12 01:36:35.897697 containerd[1454]: time="2026-03-12T01:36:35.897486885Z" level=info msg="StopPodSandbox for \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\"" Mar 12 01:36:35.908746 containerd[1454]: time="2026-03-12T01:36:35.908692427Z" level=info msg="StartContainer for \"b8935a860c44d065c0d97dbca2090467f15c1886c157fb38574b4ca4257d8a98\" returns successfully" Mar 12 01:36:36.035404 kubelet[2528]: I0312 01:36:36.035033 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58846f54f5-7rsmh" podStartSLOduration=19.440325715 podStartE2EDuration="28.03501083s" podCreationTimestamp="2026-03-12 01:36:08 +0000 UTC" firstStartedPulling="2026-03-12 01:36:26.432988527 +0000 UTC m=+44.752834298" lastFinishedPulling="2026-03-12 01:36:35.027673642 +0000 UTC m=+53.347519413" observedRunningTime="2026-03-12 01:36:36.008775873 +0000 UTC m=+54.328621665" watchObservedRunningTime="2026-03-12 01:36:36.03501083 +0000 UTC m=+54.354856631" Mar 12 01:36:36.036090 kubelet[2528]: I0312 01:36:36.035706 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fz78r" podStartSLOduration=26.463944126 podStartE2EDuration="35.035698358s" podCreationTimestamp="2026-03-12 01:36:01 +0000 UTC" firstStartedPulling="2026-03-12 01:36:27.202796198 +0000 UTC m=+45.522641979" lastFinishedPulling="2026-03-12 01:36:35.77455044 +0000 UTC m=+54.094396211" observedRunningTime="2026-03-12 01:36:36.034622285 +0000 UTC m=+54.354468066" watchObservedRunningTime="2026-03-12 01:36:36.035698358 +0000 UTC m=+54.355544149" Mar 12 01:36:36.057531 containerd[1454]: time="2026-03-12T01:36:36.057433152Z" level=info msg="StopContainer for \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\" with timeout 30 (s)" Mar 12 01:36:36.060404 containerd[1454]: time="2026-03-12T01:36:36.060315622Z" level=info msg="StopContainer for \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\" with timeout 30 (s)" Mar 12 01:36:36.064092 containerd[1454]: time="2026-03-12T01:36:36.064020487Z" level=info msg="Stop container \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\" with signal terminated" Mar 12 01:36:36.065849 containerd[1454]: time="2026-03-12T01:36:36.065630865Z" level=info msg="Stop container \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\" with signal terminated" Mar 12 01:36:36.088616 kubelet[2528]: I0312 01:36:36.088530 2528 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 12 01:36:36.089951 systemd[1]: cri-containerd-ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd.scope: Deactivated successfully. Mar 12 01:36:36.091004 kubelet[2528]: I0312 01:36:36.090909 2528 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 12 01:36:36.105572 systemd[1]: cri-containerd-c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766.scope: Deactivated successfully. 
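The two StopContainer requests above carry a 30-second grace period: containerd first delivers SIGTERM (the "with signal terminated" entries), escalates to SIGKILL if the deadline passes, and systemd then reports each container scope as deactivated. A sketch of the same call made directly against the CRI runtime service follows; it assumes containerd's CRI endpoint on its main socket, and the gRPC plumbing is illustrative:

    package main

    import (
        "context"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd serves the CRI v1 API on its main socket.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()

        // Timeout is the grace period in seconds: SIGTERM now, SIGKILL after.
        _, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: "ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd",
            Timeout:     30,
        })
        if err != nil {
            panic(err)
        }
    }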
Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.004 [INFO][4919] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.009 [INFO][4919] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" iface="eth0" netns="/var/run/netns/cni-d60a49a4-d4fe-4be1-604e-7fa6a5ba144c" Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.010 [INFO][4919] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" iface="eth0" netns="/var/run/netns/cni-d60a49a4-d4fe-4be1-604e-7fa6a5ba144c" Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.011 [INFO][4919] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" iface="eth0" netns="/var/run/netns/cni-d60a49a4-d4fe-4be1-604e-7fa6a5ba144c" Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.011 [INFO][4919] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.011 [INFO][4919] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.076 [INFO][4939] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" HandleID="k8s-pod-network.8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.076 [INFO][4939] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.077 [INFO][4939] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.086 [WARNING][4939] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" HandleID="k8s-pod-network.8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.086 [INFO][4939] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" HandleID="k8s-pod-network.8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.091 [INFO][4939] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:36.126352 containerd[1454]: 2026-03-12 01:36:36.110 [INFO][4919] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:36.130493 containerd[1454]: time="2026-03-12T01:36:36.126981614Z" level=info msg="TearDown network for sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\" successfully" Mar 12 01:36:36.130493 containerd[1454]: time="2026-03-12T01:36:36.127014966Z" level=info msg="StopPodSandbox for \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\" returns successfully" Mar 12 01:36:36.135238 systemd[1]: run-netns-cni\x2dd60a49a4\x2dd4fe\x2d4be1\x2d604e\x2d7fa6a5ba144c.mount: Deactivated successfully. Mar 12 01:36:36.139725 containerd[1454]: time="2026-03-12T01:36:36.139642350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf4cfd8c5-kpffb,Uid:dd73580a-b13b-41aa-8b3e-da326a7dc9c7,Namespace:calico-system,Attempt:1,}" Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.013 [INFO][4914] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.017 [INFO][4914] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" iface="eth0" netns="/var/run/netns/cni-26785f16-f503-6473-7ecd-b20805b1ba91" Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.018 [INFO][4914] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" iface="eth0" netns="/var/run/netns/cni-26785f16-f503-6473-7ecd-b20805b1ba91" Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.019 [INFO][4914] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" iface="eth0" netns="/var/run/netns/cni-26785f16-f503-6473-7ecd-b20805b1ba91" Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.019 [INFO][4914] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.019 [INFO][4914] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.111 [INFO][4945] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" HandleID="k8s-pod-network.3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.111 [INFO][4945] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.111 [INFO][4945] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.124 [WARNING][4945] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" HandleID="k8s-pod-network.3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.124 [INFO][4945] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" HandleID="k8s-pod-network.3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.133 [INFO][4945] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:36.155040 containerd[1454]: 2026-03-12 01:36:36.139 [INFO][4914] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:36.155914 containerd[1454]: time="2026-03-12T01:36:36.155398215Z" level=info msg="TearDown network for sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\" successfully" Mar 12 01:36:36.155914 containerd[1454]: time="2026-03-12T01:36:36.155421607Z" level=info msg="StopPodSandbox for \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\" returns successfully" Mar 12 01:36:36.159818 containerd[1454]: time="2026-03-12T01:36:36.159177380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcfc6547b-f9sbm,Uid:cc45d57a-2d65-4471-a944-16cc99da2325,Namespace:calico-system,Attempt:1,}" Mar 12 01:36:36.197986 containerd[1454]: time="2026-03-12T01:36:36.167590721Z" level=info msg="shim disconnected" id=c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766 namespace=k8s.io Mar 12 01:36:36.197986 containerd[1454]: time="2026-03-12T01:36:36.197944509Z" level=warning msg="cleaning up after shim disconnected" id=c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766 namespace=k8s.io Mar 12 01:36:36.197986 containerd[1454]: time="2026-03-12T01:36:36.197967831Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:36:36.219806 containerd[1454]: time="2026-03-12T01:36:36.219741538Z" level=info msg="shim disconnected" id=ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd namespace=k8s.io Mar 12 01:36:36.219806 containerd[1454]: time="2026-03-12T01:36:36.219790817Z" level=warning msg="cleaning up after shim disconnected" id=ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd namespace=k8s.io Mar 12 01:36:36.219806 containerd[1454]: time="2026-03-12T01:36:36.219799423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:36:36.242540 containerd[1454]: time="2026-03-12T01:36:36.242444415Z" level=warning msg="cleanup warnings time=\"2026-03-12T01:36:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 12 01:36:36.274603 containerd[1454]: time="2026-03-12T01:36:36.274549306Z" level=info msg="StopContainer for \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\" returns successfully" Mar 12 01:36:36.276038 containerd[1454]: time="2026-03-12T01:36:36.275994193Z" level=info msg="StopContainer for \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\" returns successfully" Mar 12 01:36:36.276574 containerd[1454]: time="2026-03-12T01:36:36.276496909Z" level=info 
msg="StopPodSandbox for \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\"" Mar 12 01:36:36.286309 containerd[1454]: time="2026-03-12T01:36:36.286080806Z" level=info msg="Container to stop \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 01:36:36.286309 containerd[1454]: time="2026-03-12T01:36:36.286158889Z" level=info msg="Container to stop \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 01:36:36.301243 systemd[1]: cri-containerd-592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f.scope: Deactivated successfully. Mar 12 01:36:36.333648 containerd[1454]: time="2026-03-12T01:36:36.333539271Z" level=info msg="shim disconnected" id=592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f namespace=k8s.io Mar 12 01:36:36.333648 containerd[1454]: time="2026-03-12T01:36:36.333630109Z" level=warning msg="cleaning up after shim disconnected" id=592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f namespace=k8s.io Mar 12 01:36:36.333648 containerd[1454]: time="2026-03-12T01:36:36.333646679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:36:36.466477 systemd-networkd[1389]: calid3607af6102: Link DOWN Mar 12 01:36:36.466510 systemd-networkd[1389]: calid3607af6102: Lost carrier Mar 12 01:36:36.493149 systemd-networkd[1389]: calicf153f298bc: Link UP Mar 12 01:36:36.493593 systemd-networkd[1389]: calicf153f298bc: Gained carrier Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.335 [INFO][5019] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0 calico-kube-controllers-7cf4cfd8c5- calico-system dd73580a-b13b-41aa-8b3e-da326a7dc9c7 1017 0 2026-03-12 01:36:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cf4cfd8c5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7cf4cfd8c5-kpffb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicf153f298bc [] [] }} ContainerID="91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" Namespace="calico-system" Pod="calico-kube-controllers-7cf4cfd8c5-kpffb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.336 [INFO][5019] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" Namespace="calico-system" Pod="calico-kube-controllers-7cf4cfd8c5-kpffb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.406 [INFO][5080] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" HandleID="k8s-pod-network.91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.421 [INFO][5080] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" HandleID="k8s-pod-network.91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006c4ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7cf4cfd8c5-kpffb", "timestamp":"2026-03-12 01:36:36.406829245 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00049c2c0)} Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.421 [INFO][5080] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.421 [INFO][5080] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.421 [INFO][5080] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.425 [INFO][5080] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" host="localhost" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.433 [INFO][5080] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.441 [INFO][5080] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.443 [INFO][5080] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.446 [INFO][5080] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.446 [INFO][5080] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" host="localhost" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.451 [INFO][5080] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.457 [INFO][5080] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" host="localhost" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.468 [INFO][5080] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" host="localhost" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.468 [INFO][5080] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" host="localhost" Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.468 [INFO][5080] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:36:36.516875 containerd[1454]: 2026-03-12 01:36:36.468 [INFO][5080] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" HandleID="k8s-pod-network.91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:36.519763 containerd[1454]: 2026-03-12 01:36:36.487 [INFO][5019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" Namespace="calico-system" Pod="calico-kube-controllers-7cf4cfd8c5-kpffb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0", GenerateName:"calico-kube-controllers-7cf4cfd8c5-", Namespace:"calico-system", SelfLink:"", UID:"dd73580a-b13b-41aa-8b3e-da326a7dc9c7", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf4cfd8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7cf4cfd8c5-kpffb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf153f298bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:36.519763 containerd[1454]: 2026-03-12 01:36:36.487 [INFO][5019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" Namespace="calico-system" Pod="calico-kube-controllers-7cf4cfd8c5-kpffb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:36.519763 containerd[1454]: 2026-03-12 01:36:36.487 [INFO][5019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf153f298bc ContainerID="91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" Namespace="calico-system" Pod="calico-kube-controllers-7cf4cfd8c5-kpffb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:36.519763 containerd[1454]: 2026-03-12 01:36:36.493 [INFO][5019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" Namespace="calico-system" Pod="calico-kube-controllers-7cf4cfd8c5-kpffb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:36.519763 containerd[1454]: 2026-03-12 01:36:36.494 [INFO][5019] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" Namespace="calico-system" Pod="calico-kube-controllers-7cf4cfd8c5-kpffb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0", GenerateName:"calico-kube-controllers-7cf4cfd8c5-", Namespace:"calico-system", SelfLink:"", UID:"dd73580a-b13b-41aa-8b3e-da326a7dc9c7", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf4cfd8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e", Pod:"calico-kube-controllers-7cf4cfd8c5-kpffb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf153f298bc", MAC:"22:07:b4:c8:ac:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:36.519763 containerd[1454]: 2026-03-12 01:36:36.508 [INFO][5019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e" Namespace="calico-system" Pod="calico-kube-controllers-7cf4cfd8c5-kpffb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:36.554934 containerd[1454]: time="2026-03-12T01:36:36.554455727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:36:36.554934 containerd[1454]: time="2026-03-12T01:36:36.554583643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:36:36.554934 containerd[1454]: time="2026-03-12T01:36:36.554608268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:36.561376 containerd[1454]: time="2026-03-12T01:36:36.558814481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:36.597627 systemd[1]: Started cri-containerd-91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e.scope - libcontainer container 91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e. 
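The "Populated endpoint" records above pin the host-side interface name (calicf153f298bc) and, once the veth is wired, the MAC. Calico keeps these names deterministic by hashing the workload identity into a "cali"-prefixed name that fits the kernel's 15-character interface-name limit. The sketch below shows the commonly documented scheme; treat it as illustrative, since the exact hash input varies across Calico versions:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethNameForWorkload sketches Calico's naming scheme: a "cali" prefix plus
    // the first 11 hex characters of a SHA-1 over the workload identity, which
    // keeps the result inside the 15-character IFNAMSIZ limit.
    func vethNameForWorkload(namespace, pod string) string {
        sum := sha1.Sum([]byte(fmt.Sprintf("%s.%s", namespace, pod)))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethNameForWorkload("calico-system", "calico-kube-controllers-7cf4cfd8c5-kpffb"))
    }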
Mar 12 01:36:36.613035 systemd-networkd[1389]: calif08faad35fc: Link UP Mar 12 01:36:36.615033 systemd-networkd[1389]: calif08faad35fc: Gained carrier Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.333 [INFO][5030] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0 calico-apiserver-5fcfc6547b- calico-system cc45d57a-2d65-4471-a944-16cc99da2325 1016 0 2026-03-12 01:36:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fcfc6547b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fcfc6547b-f9sbm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif08faad35fc [] [] }} ContainerID="59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-f9sbm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.334 [INFO][5030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-f9sbm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.407 [INFO][5071] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" HandleID="k8s-pod-network.59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.423 [INFO][5071] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" HandleID="k8s-pod-network.59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ad180), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5fcfc6547b-f9sbm", "timestamp":"2026-03-12 01:36:36.407395316 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fef20)} Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.424 [INFO][5071] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.469 [INFO][5071] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.469 [INFO][5071] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.528 [INFO][5071] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" host="localhost" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.543 [INFO][5071] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.552 [INFO][5071] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.557 [INFO][5071] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.561 [INFO][5071] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.561 [INFO][5071] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" host="localhost" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.565 [INFO][5071] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48 Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.574 [INFO][5071] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" host="localhost" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.590 [INFO][5071] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" host="localhost" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.591 [INFO][5071] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" host="localhost" Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.592 [INFO][5071] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
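Both sandboxes draw from the same affine block, 192.168.88.128/26: a 64-address range covering .128 through .191, which is why consecutive allocations land on .134 and then .135. A quick stdlib check of that containment:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        for _, s := range []string{"192.168.88.134", "192.168.88.135", "192.168.88.192"} {
            ip := netip.MustParseAddr(s)
            // .134 and .135 sit inside the /26; .192 is the first address past it.
            fmt.Println(ip, block.Contains(ip))
        }
    }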
Mar 12 01:36:36.641540 containerd[1454]: 2026-03-12 01:36:36.592 [INFO][5071] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" HandleID="k8s-pod-network.59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:36.642565 containerd[1454]: 2026-03-12 01:36:36.596 [INFO][5030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-f9sbm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0", GenerateName:"calico-apiserver-5fcfc6547b-", Namespace:"calico-system", SelfLink:"", UID:"cc45d57a-2d65-4471-a944-16cc99da2325", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcfc6547b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fcfc6547b-f9sbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif08faad35fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:36.642565 containerd[1454]: 2026-03-12 01:36:36.596 [INFO][5030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-f9sbm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:36.642565 containerd[1454]: 2026-03-12 01:36:36.596 [INFO][5030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif08faad35fc ContainerID="59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-f9sbm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:36.642565 containerd[1454]: 2026-03-12 01:36:36.616 [INFO][5030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-f9sbm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:36.642565 containerd[1454]: 2026-03-12 01:36:36.616 [INFO][5030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-f9sbm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0", GenerateName:"calico-apiserver-5fcfc6547b-", Namespace:"calico-system", SelfLink:"", UID:"cc45d57a-2d65-4471-a944-16cc99da2325", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcfc6547b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48", Pod:"calico-apiserver-5fcfc6547b-f9sbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif08faad35fc", MAC:"12:d0:e8:ea:e7:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:36.642565 containerd[1454]: 2026-03-12 01:36:36.636 [INFO][5030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48" Namespace="calico-system" Pod="calico-apiserver-5fcfc6547b-f9sbm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:36.644448 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.463 [INFO][5102] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.464 [INFO][5102] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" iface="eth0" netns="/var/run/netns/cni-f59fcbd1-67d6-89de-1260-026ab6857c0c" Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.465 [INFO][5102] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" iface="eth0" netns="/var/run/netns/cni-f59fcbd1-67d6-89de-1260-026ab6857c0c" Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.484 [INFO][5102] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" after=20.285963ms iface="eth0" netns="/var/run/netns/cni-f59fcbd1-67d6-89de-1260-026ab6857c0c" Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.484 [INFO][5102] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.484 [INFO][5102] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.537 [INFO][5121] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.537 [INFO][5121] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.592 [INFO][5121] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.672 [INFO][5121] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.672 [INFO][5121] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.675 [INFO][5121] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:36.685562 containerd[1454]: 2026-03-12 01:36:36.680 [INFO][5102] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:36.686159 containerd[1454]: time="2026-03-12T01:36:36.685671061Z" level=info msg="TearDown network for sandbox \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\" successfully" Mar 12 01:36:36.686159 containerd[1454]: time="2026-03-12T01:36:36.685696568Z" level=info msg="StopPodSandbox for \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\" returns successfully" Mar 12 01:36:36.686258 containerd[1454]: time="2026-03-12T01:36:36.686155694Z" level=info msg="StopPodSandbox for \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\"" Mar 12 01:36:36.687890 containerd[1454]: time="2026-03-12T01:36:36.687679583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:36:36.688478 containerd[1454]: time="2026-03-12T01:36:36.688400021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:36:36.688530 containerd[1454]: time="2026-03-12T01:36:36.688481341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:36.688844 containerd[1454]: time="2026-03-12T01:36:36.688773540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:36.718748 systemd[1]: Started cri-containerd-59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48.scope - libcontainer container 59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48. Mar 12 01:36:36.726327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd-rootfs.mount: Deactivated successfully. Mar 12 01:36:36.726432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766-rootfs.mount: Deactivated successfully. Mar 12 01:36:36.726503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f-rootfs.mount: Deactivated successfully. Mar 12 01:36:36.726581 systemd[1]: run-netns-cni\x2df59fcbd1\x2d67d6\x2d89de\x2d1260\x2d026ab6857c0c.mount: Deactivated successfully. Mar 12 01:36:36.726646 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f-shm.mount: Deactivated successfully. Mar 12 01:36:36.726714 systemd[1]: run-netns-cni\x2d26785f16\x2df503\x2d6473\x2d7ecd\x2db20805b1ba91.mount: Deactivated successfully. Mar 12 01:36:36.747193 containerd[1454]: time="2026-03-12T01:36:36.746251414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf4cfd8c5-kpffb,Uid:dd73580a-b13b-41aa-8b3e-da326a7dc9c7,Namespace:calico-system,Attempt:1,} returns sandbox id \"91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e\"" Mar 12 01:36:36.749592 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:36:36.762851 containerd[1454]: time="2026-03-12T01:36:36.762785441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 12 01:36:36.792343 containerd[1454]: time="2026-03-12T01:36:36.792203423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcfc6547b-f9sbm,Uid:cc45d57a-2d65-4471-a944-16cc99da2325,Namespace:calico-system,Attempt:1,} returns sandbox id \"59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48\"" Mar 12 01:36:36.799679 containerd[1454]: time="2026-03-12T01:36:36.799249623Z" level=info msg="CreateContainer within sandbox \"59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 01:36:36.828928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1356636733.mount: Deactivated successfully. 
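The run-netns-cni\x2d...mount unit names above are systemd's path escaping at work: "/" maps to "-" in unit names, so a literal "-" inside the netns name must be hex-escaped as \x2d. go-systemd exposes the same escaping; a small sketch, assuming the go-systemd/v22 unit package:

    package main

    import (
        "fmt"

        "github.com/coreos/go-systemd/v22/unit"
    )

    func main() {
        // Mount units are named after the escaped mount path, which is why the
        // log shows run-netns-cni\x2df59fcbd1-....mount for /run/netns/cni-f59fcbd1-....
        path := "/run/netns/cni-f59fcbd1-67d6-89de-1260-026ab6857c0c"
        fmt.Println(unit.UnitNamePathEscape(path) + ".mount")
    }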
Mar 12 01:36:36.833797 containerd[1454]: time="2026-03-12T01:36:36.833731544Z" level=info msg="CreateContainer within sandbox \"59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2061e307dba928da5a68513ba39c1dc8dd73762cac4e17f0a629cab6e90db8a2\"" Mar 12 01:36:36.835233 containerd[1454]: time="2026-03-12T01:36:36.834586330Z" level=info msg="StartContainer for \"2061e307dba928da5a68513ba39c1dc8dd73762cac4e17f0a629cab6e90db8a2\"" Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.782 [WARNING][5228] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--58846f54f5--7rsmh-eth0", GenerateName:"whisker-58846f54f5-", Namespace:"calico-system", SelfLink:"", UID:"5999426b-6640-4a93-bb5d-5d94700e760d", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58846f54f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f", Pod:"whisker-58846f54f5-7rsmh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid3607af6102", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.783 [INFO][5228] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.783 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" iface="eth0" netns="" Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.783 [INFO][5228] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.783 [INFO][5228] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.832 [INFO][5265] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.833 [INFO][5265] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.833 [INFO][5265] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.843 [WARNING][5265] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.843 [INFO][5265] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.846 [INFO][5265] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:36.861562 containerd[1454]: 2026-03-12 01:36:36.851 [INFO][5228] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:36.862440 containerd[1454]: time="2026-03-12T01:36:36.862374883Z" level=info msg="TearDown network for sandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\" successfully" Mar 12 01:36:36.862440 containerd[1454]: time="2026-03-12T01:36:36.862426358Z" level=info msg="StopPodSandbox for \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\" returns successfully" Mar 12 01:36:36.904669 systemd[1]: Started cri-containerd-2061e307dba928da5a68513ba39c1dc8dd73762cac4e17f0a629cab6e90db8a2.scope - libcontainer container 2061e307dba928da5a68513ba39c1dc8dd73762cac4e17f0a629cab6e90db8a2. 
Mar 12 01:36:36.918100 kubelet[2528]: I0312 01:36:36.918071 2528 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5999426b-6640-4a93-bb5d-5d94700e760d-whisker-backend-key-pair\") pod \"5999426b-6640-4a93-bb5d-5d94700e760d\" (UID: \"5999426b-6640-4a93-bb5d-5d94700e760d\") " Mar 12 01:36:36.918846 kubelet[2528]: I0312 01:36:36.918826 2528 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v62l\" (UniqueName: \"kubernetes.io/projected/5999426b-6640-4a93-bb5d-5d94700e760d-kube-api-access-6v62l\") pod \"5999426b-6640-4a93-bb5d-5d94700e760d\" (UID: \"5999426b-6640-4a93-bb5d-5d94700e760d\") " Mar 12 01:36:36.919692 kubelet[2528]: I0312 01:36:36.919662 2528 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5999426b-6640-4a93-bb5d-5d94700e760d-whisker-ca-bundle\") pod \"5999426b-6640-4a93-bb5d-5d94700e760d\" (UID: \"5999426b-6640-4a93-bb5d-5d94700e760d\") " Mar 12 01:36:36.920390 kubelet[2528]: I0312 01:36:36.920374 2528 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5999426b-6640-4a93-bb5d-5d94700e760d-nginx-config\") pod \"5999426b-6640-4a93-bb5d-5d94700e760d\" (UID: \"5999426b-6640-4a93-bb5d-5d94700e760d\") " Mar 12 01:36:36.921629 kubelet[2528]: I0312 01:36:36.920957 2528 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5999426b-6640-4a93-bb5d-5d94700e760d-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "5999426b-6640-4a93-bb5d-5d94700e760d" (UID: "5999426b-6640-4a93-bb5d-5d94700e760d"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:36:36.922141 kubelet[2528]: I0312 01:36:36.922060 2528 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5999426b-6640-4a93-bb5d-5d94700e760d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5999426b-6640-4a93-bb5d-5d94700e760d" (UID: "5999426b-6640-4a93-bb5d-5d94700e760d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:36:36.923138 kubelet[2528]: I0312 01:36:36.923086 2528 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5999426b-6640-4a93-bb5d-5d94700e760d-kube-api-access-6v62l" (OuterVolumeSpecName: "kube-api-access-6v62l") pod "5999426b-6640-4a93-bb5d-5d94700e760d" (UID: "5999426b-6640-4a93-bb5d-5d94700e760d"). InnerVolumeSpecName "kube-api-access-6v62l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 01:36:36.923917 kubelet[2528]: I0312 01:36:36.923855 2528 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5999426b-6640-4a93-bb5d-5d94700e760d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5999426b-6640-4a93-bb5d-5d94700e760d" (UID: "5999426b-6640-4a93-bb5d-5d94700e760d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 01:36:36.965391 containerd[1454]: time="2026-03-12T01:36:36.965314933Z" level=info msg="StartContainer for \"2061e307dba928da5a68513ba39c1dc8dd73762cac4e17f0a629cab6e90db8a2\" returns successfully" Mar 12 01:36:37.005060 kubelet[2528]: I0312 01:36:37.004948 2528 scope.go:117] "RemoveContainer" containerID="ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd" Mar 12 01:36:37.007137 containerd[1454]: time="2026-03-12T01:36:37.007011016Z" level=info msg="RemoveContainer for \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\"" Mar 12 01:36:37.015500 systemd[1]: Removed slice kubepods-besteffort-pod5999426b_6640_4a93_bb5d_5d94700e760d.slice - libcontainer container kubepods-besteffort-pod5999426b_6640_4a93_bb5d_5d94700e760d.slice. Mar 12 01:36:37.021333 containerd[1454]: time="2026-03-12T01:36:37.021138166Z" level=info msg="RemoveContainer for \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\" returns successfully" Mar 12 01:36:37.023700 kubelet[2528]: I0312 01:36:37.022626 2528 scope.go:117] "RemoveContainer" containerID="c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766" Mar 12 01:36:37.023700 kubelet[2528]: I0312 01:36:37.022978 2528 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5999426b-6640-4a93-bb5d-5d94700e760d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 12 01:36:37.025698 kubelet[2528]: I0312 01:36:37.024473 2528 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6v62l\" (UniqueName: \"kubernetes.io/projected/5999426b-6640-4a93-bb5d-5d94700e760d-kube-api-access-6v62l\") on node \"localhost\" DevicePath \"\"" Mar 12 01:36:37.025698 kubelet[2528]: I0312 01:36:37.024750 2528 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5999426b-6640-4a93-bb5d-5d94700e760d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 12 01:36:37.025698 kubelet[2528]: I0312 01:36:37.025437 2528 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5999426b-6640-4a93-bb5d-5d94700e760d-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 12 01:36:37.026733 containerd[1454]: time="2026-03-12T01:36:37.026679634Z" level=info msg="RemoveContainer for \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\"" Mar 12 01:36:37.032361 containerd[1454]: time="2026-03-12T01:36:37.032321697Z" level=info msg="RemoveContainer for \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\" returns successfully" Mar 12 01:36:37.032829 kubelet[2528]: I0312 01:36:37.032710 2528 scope.go:117] "RemoveContainer" containerID="ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd" Mar 12 01:36:37.038782 kubelet[2528]: I0312 01:36:37.037833 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5fcfc6547b-f9sbm" podStartSLOduration=37.037822521 podStartE2EDuration="37.037822521s" podCreationTimestamp="2026-03-12 01:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:36:37.019032453 +0000 UTC m=+55.338878244" watchObservedRunningTime="2026-03-12 01:36:37.037822521 +0000 UTC m=+55.357668292" Mar 12 01:36:37.049484 containerd[1454]: time="2026-03-12T01:36:37.038765888Z" 
level=error msg="ContainerStatus for \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\": not found" Mar 12 01:36:37.049806 kubelet[2528]: E0312 01:36:37.049655 2528 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\": not found" containerID="ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd" Mar 12 01:36:37.049806 kubelet[2528]: I0312 01:36:37.049692 2528 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd"} err="failed to get container status \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\": not found" Mar 12 01:36:37.049806 kubelet[2528]: I0312 01:36:37.049714 2528 scope.go:117] "RemoveContainer" containerID="c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766" Mar 12 01:36:37.050674 containerd[1454]: time="2026-03-12T01:36:37.050501721Z" level=error msg="ContainerStatus for \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\": not found" Mar 12 01:36:37.050995 kubelet[2528]: E0312 01:36:37.050849 2528 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\": not found" containerID="c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766" Mar 12 01:36:37.050995 kubelet[2528]: I0312 01:36:37.050966 2528 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766"} err="failed to get container status \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\": not found" Mar 12 01:36:37.051081 kubelet[2528]: I0312 01:36:37.051039 2528 scope.go:117] "RemoveContainer" containerID="ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd" Mar 12 01:36:37.052054 containerd[1454]: time="2026-03-12T01:36:37.051720539Z" level=error msg="ContainerStatus for \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\": not found" Mar 12 01:36:37.052093 kubelet[2528]: I0312 01:36:37.051944 2528 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd"} err="failed to get container status \"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"ee562f1f3ac92c6665481c24cde2078bbd7c1db7962a6b8bf442d2217bfc09dd\": not found" Mar 12 01:36:37.052093 kubelet[2528]: I0312 01:36:37.051963 2528 scope.go:117] "RemoveContainer" containerID="c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766" Mar 12 01:36:37.052630 containerd[1454]: time="2026-03-12T01:36:37.052460724Z" level=error msg="ContainerStatus for \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\": not found" Mar 12 01:36:37.053055 kubelet[2528]: I0312 01:36:37.052724 2528 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766"} err="failed to get container status \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1077a845444c615464fe32ab04d96ab28bde09b6853c9dea1bf7f67f317e766\": not found" Mar 12 01:36:37.155713 systemd[1]: Created slice kubepods-besteffort-poda3f7abce_b63c_4326_85eb_78c593e87d69.slice - libcontainer container kubepods-besteffort-poda3f7abce_b63c_4326_85eb_78c593e87d69.slice. Mar 12 01:36:37.226387 kubelet[2528]: I0312 01:36:37.226195 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a3f7abce-b63c-4326-85eb-78c593e87d69-nginx-config\") pod \"whisker-7587d9cd96-xwzrx\" (UID: \"a3f7abce-b63c-4326-85eb-78c593e87d69\") " pod="calico-system/whisker-7587d9cd96-xwzrx" Mar 12 01:36:37.226387 kubelet[2528]: I0312 01:36:37.226243 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f7abce-b63c-4326-85eb-78c593e87d69-whisker-ca-bundle\") pod \"whisker-7587d9cd96-xwzrx\" (UID: \"a3f7abce-b63c-4326-85eb-78c593e87d69\") " pod="calico-system/whisker-7587d9cd96-xwzrx" Mar 12 01:36:37.226387 kubelet[2528]: I0312 01:36:37.226299 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a3f7abce-b63c-4326-85eb-78c593e87d69-whisker-backend-key-pair\") pod \"whisker-7587d9cd96-xwzrx\" (UID: \"a3f7abce-b63c-4326-85eb-78c593e87d69\") " pod="calico-system/whisker-7587d9cd96-xwzrx" Mar 12 01:36:37.226387 kubelet[2528]: I0312 01:36:37.226317 2528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvgs\" (UniqueName: \"kubernetes.io/projected/a3f7abce-b63c-4326-85eb-78c593e87d69-kube-api-access-jnvgs\") pod \"whisker-7587d9cd96-xwzrx\" (UID: \"a3f7abce-b63c-4326-85eb-78c593e87d69\") " pod="calico-system/whisker-7587d9cd96-xwzrx" Mar 12 01:36:37.467246 containerd[1454]: time="2026-03-12T01:36:37.466404833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7587d9cd96-xwzrx,Uid:a3f7abce-b63c-4326-85eb-78c593e87d69,Namespace:calico-system,Attempt:0,}" Mar 12 01:36:37.639209 systemd-networkd[1389]: cali1221108d75f: Link UP Mar 12 01:36:37.641971 systemd-networkd[1389]: cali1221108d75f: Gained carrier Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.549 [INFO][5325] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--7587d9cd96--xwzrx-eth0 whisker-7587d9cd96- calico-system a3f7abce-b63c-4326-85eb-78c593e87d69 1056 0 2026-03-12 01:36:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7587d9cd96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7587d9cd96-xwzrx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1221108d75f [] [] }} ContainerID="52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" Namespace="calico-system" Pod="whisker-7587d9cd96-xwzrx" WorkloadEndpoint="localhost-k8s-whisker--7587d9cd96--xwzrx-" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.549 [INFO][5325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" Namespace="calico-system" Pod="whisker-7587d9cd96-xwzrx" WorkloadEndpoint="localhost-k8s-whisker--7587d9cd96--xwzrx-eth0" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.590 [INFO][5340] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" HandleID="k8s-pod-network.52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" Workload="localhost-k8s-whisker--7587d9cd96--xwzrx-eth0" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.598 [INFO][5340] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" HandleID="k8s-pod-network.52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" Workload="localhost-k8s-whisker--7587d9cd96--xwzrx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004826e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7587d9cd96-xwzrx", "timestamp":"2026-03-12 01:36:37.59090166 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003fa580)} Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.599 [INFO][5340] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.599 [INFO][5340] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.599 [INFO][5340] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.602 [INFO][5340] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" host="localhost" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.607 [INFO][5340] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.612 [INFO][5340] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.614 [INFO][5340] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.616 [INFO][5340] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.616 [INFO][5340] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" host="localhost" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.618 [INFO][5340] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4 Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.622 [INFO][5340] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" host="localhost" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.631 [INFO][5340] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" host="localhost" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.631 [INFO][5340] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" host="localhost" Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.631 [INFO][5340] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
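[Annotation] The ipam/ipam.go trace above is Calico's block-affinity allocation path: the node holds an affinity for the block 192.168.88.128/26, loads it, and claims the next free address (192.168.88.136) while holding the host-wide IPAM lock, so concurrent CNI ADDs on the same node cannot double-assign. A quick standalone check that the claimed address really falls inside the affine block (illustrative Go using net/netip, not Calico's code):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block this host holds an affinity for, per the ipam.go 526/160 lines above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	// Address claimed for whisker-7587d9cd96-xwzrx, per ipam.go 1288 above.
	claimed := netip.MustParseAddr("192.168.88.136")

	// A /26 spans 64 addresses, here 192.168.88.128 through 192.168.88.191.
	fmt.Println(block.Contains(claimed)) // true
}
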
Mar 12 01:36:37.664499 containerd[1454]: 2026-03-12 01:36:37.631 [INFO][5340] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" HandleID="k8s-pod-network.52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" Workload="localhost-k8s-whisker--7587d9cd96--xwzrx-eth0" Mar 12 01:36:37.665239 containerd[1454]: 2026-03-12 01:36:37.635 [INFO][5325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" Namespace="calico-system" Pod="whisker-7587d9cd96-xwzrx" WorkloadEndpoint="localhost-k8s-whisker--7587d9cd96--xwzrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7587d9cd96--xwzrx-eth0", GenerateName:"whisker-7587d9cd96-", Namespace:"calico-system", SelfLink:"", UID:"a3f7abce-b63c-4326-85eb-78c593e87d69", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7587d9cd96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7587d9cd96-xwzrx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1221108d75f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:37.665239 containerd[1454]: 2026-03-12 01:36:37.635 [INFO][5325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" Namespace="calico-system" Pod="whisker-7587d9cd96-xwzrx" WorkloadEndpoint="localhost-k8s-whisker--7587d9cd96--xwzrx-eth0" Mar 12 01:36:37.665239 containerd[1454]: 2026-03-12 01:36:37.635 [INFO][5325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1221108d75f ContainerID="52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" Namespace="calico-system" Pod="whisker-7587d9cd96-xwzrx" WorkloadEndpoint="localhost-k8s-whisker--7587d9cd96--xwzrx-eth0" Mar 12 01:36:37.665239 containerd[1454]: 2026-03-12 01:36:37.644 [INFO][5325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" Namespace="calico-system" Pod="whisker-7587d9cd96-xwzrx" WorkloadEndpoint="localhost-k8s-whisker--7587d9cd96--xwzrx-eth0" Mar 12 01:36:37.665239 containerd[1454]: 2026-03-12 01:36:37.646 [INFO][5325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" Namespace="calico-system" Pod="whisker-7587d9cd96-xwzrx" WorkloadEndpoint="localhost-k8s-whisker--7587d9cd96--xwzrx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7587d9cd96--xwzrx-eth0", GenerateName:"whisker-7587d9cd96-", Namespace:"calico-system", SelfLink:"", UID:"a3f7abce-b63c-4326-85eb-78c593e87d69", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7587d9cd96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4", Pod:"whisker-7587d9cd96-xwzrx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1221108d75f", MAC:"ce:b2:4d:ff:07:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:37.665239 containerd[1454]: 2026-03-12 01:36:37.659 [INFO][5325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4" Namespace="calico-system" Pod="whisker-7587d9cd96-xwzrx" WorkloadEndpoint="localhost-k8s-whisker--7587d9cd96--xwzrx-eth0" Mar 12 01:36:37.693435 containerd[1454]: time="2026-03-12T01:36:37.692986873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:36:37.693435 containerd[1454]: time="2026-03-12T01:36:37.693213992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:36:37.693435 containerd[1454]: time="2026-03-12T01:36:37.693323624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:37.697066 containerd[1454]: time="2026-03-12T01:36:37.695615269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:37.729998 systemd[1]: var-lib-kubelet-pods-5999426b\x2d6640\x2d4a93\x2dbb5d\x2d5d94700e760d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6v62l.mount: Deactivated successfully. Mar 12 01:36:37.730671 systemd[1]: var-lib-kubelet-pods-5999426b\x2d6640\x2d4a93\x2dbb5d\x2d5d94700e760d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 12 01:36:37.741497 systemd[1]: Started cri-containerd-52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4.scope - libcontainer container 52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4. 
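[Annotation] The two systemd .mount deactivations above are the mount-layer tail of the kubelet volume teardown that started at 01:36:36: the kube-api-access-6v62l projected volume and the whisker-backend-key-pair secret for the deleted pod UID 5999426b-... are unmounted. The unit names look mangled because systemd escapes filesystem paths into unit names: the leading "/" is dropped, every other "/" becomes "-", and bytes outside [A-Za-z0-9._] (including "-" itself and "~") become \xNN. A rough Go sketch of that escaping, enough to reproduce the unit names in this log (an approximation, not systemd's reference implementation; see systemd-escape(1) for the full rules):

package main

import (
	"fmt"
	"strings"
)

// escapePath approximates systemd's path-to-unit-name escaping as seen in
// the var-lib-kubelet-pods-...mount lines above.
func escapePath(p string) string {
	p = strings.TrimPrefix(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c == '.' || c == '_' ||
			('0' <= c && c <= '9') ||
			('a' <= c && c <= 'z') ||
			('A' <= c && c <= 'Z'):
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // "-" -> \x2d, "~" -> \x7e, etc.
		}
	}
	return b.String()
}

func main() {
	vol := "/var/lib/kubelet/pods/5999426b-6640-4a93-bb5d-5d94700e760d/volumes/kubernetes.io~projected/kube-api-access-6v62l"
	// Prints the unit name journald logged above, minus the ".mount" suffix.
	fmt.Println(escapePath(vol))
}
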
Mar 12 01:36:37.767613 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:36:37.810444 containerd[1454]: time="2026-03-12T01:36:37.810331759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7587d9cd96-xwzrx,Uid:a3f7abce-b63c-4326-85eb-78c593e87d69,Namespace:calico-system,Attempt:0,} returns sandbox id \"52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4\"" Mar 12 01:36:37.818339 containerd[1454]: time="2026-03-12T01:36:37.818241001Z" level=info msg="CreateContainer within sandbox \"52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 01:36:37.838028 containerd[1454]: time="2026-03-12T01:36:37.837977130Z" level=info msg="CreateContainer within sandbox \"52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"14b75bc4e4cffd9b21fd7f033e183319dc3eceec3251a31cb66a7530924eacc6\"" Mar 12 01:36:37.839231 containerd[1454]: time="2026-03-12T01:36:37.839158610Z" level=info msg="StartContainer for \"14b75bc4e4cffd9b21fd7f033e183319dc3eceec3251a31cb66a7530924eacc6\"" Mar 12 01:36:37.898586 systemd[1]: Started cri-containerd-14b75bc4e4cffd9b21fd7f033e183319dc3eceec3251a31cb66a7530924eacc6.scope - libcontainer container 14b75bc4e4cffd9b21fd7f033e183319dc3eceec3251a31cb66a7530924eacc6. Mar 12 01:36:37.900859 kubelet[2528]: I0312 01:36:37.900776 2528 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5999426b-6640-4a93-bb5d-5d94700e760d" path="/var/lib/kubelet/pods/5999426b-6640-4a93-bb5d-5d94700e760d/volumes" Mar 12 01:36:37.963963 containerd[1454]: time="2026-03-12T01:36:37.963909163Z" level=info msg="StartContainer for \"14b75bc4e4cffd9b21fd7f033e183319dc3eceec3251a31cb66a7530924eacc6\" returns successfully" Mar 12 01:36:37.970777 containerd[1454]: time="2026-03-12T01:36:37.970610883Z" level=info msg="CreateContainer within sandbox \"52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 01:36:37.991021 containerd[1454]: time="2026-03-12T01:36:37.990793799Z" level=info msg="CreateContainer within sandbox \"52148d8ee3a1c142ac1787f42064dd2e67d55b6df7b52e97e44f0f5ec7c94da4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a96bb605657757c2006b862cb475b4d4ca20e68c3b56014fc7ebdae0f137c9e9\"" Mar 12 01:36:37.992573 containerd[1454]: time="2026-03-12T01:36:37.992536957Z" level=info msg="StartContainer for \"a96bb605657757c2006b862cb475b4d4ca20e68c3b56014fc7ebdae0f137c9e9\"" Mar 12 01:36:38.013376 kubelet[2528]: I0312 01:36:38.012799 2528 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:36:38.038445 systemd[1]: Started cri-containerd-a96bb605657757c2006b862cb475b4d4ca20e68c3b56014fc7ebdae0f137c9e9.scope - libcontainer container a96bb605657757c2006b862cb475b4d4ca20e68c3b56014fc7ebdae0f137c9e9. 
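[Annotation] Between the RunPodSandbox return at 01:36:37.810 and the two "Started cri-containerd-...scope" units, containerd walks the standard CRI pod lifecycle for whisker-7587d9cd96-xwzrx: one sandbox per pod, then CreateContainer and StartContainer for each container inside it (whisker, then whisker-backend). A minimal Go paraphrase of the call ordering visible here; the real API is the k8s.io/cri-api RuntimeService over gRPC with protobuf configs, so the names below mirror the log but the signatures and types are simplified stand-ins:

package main

// Simplified stand-ins for the CRI config messages (the real ones are protobufs).
type podSandboxConfig struct{ name, namespace, uid string }
type containerConfig struct{ name string }

// runtimeService paraphrases the operations recorded above.
type runtimeService interface {
	RunPodSandbox(cfg podSandboxConfig) (sandboxID string, err error)
	CreateContainer(sandboxID string, cfg containerConfig) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod shows the ordering the log records: sandbox first, then
// create+start for each container within it.
func startPod(rs runtimeService, sandbox podSandboxConfig, containers []containerConfig) error {
	sandboxID, err := rs.RunPodSandbox(sandbox)
	if err != nil {
		return err
	}
	for _, c := range containers {
		id, err := rs.CreateContainer(sandboxID, c)
		if err != nil {
			return err
		}
		if err := rs.StartContainer(id); err != nil {
			return err
		}
	}
	return nil
}

func main() {} // no-op; the point is the shape and order of the calls
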
Mar 12 01:36:38.113358 containerd[1454]: time="2026-03-12T01:36:38.112687862Z" level=info msg="StartContainer for \"a96bb605657757c2006b862cb475b4d4ca20e68c3b56014fc7ebdae0f137c9e9\" returns successfully" Mar 12 01:36:38.198776 systemd-networkd[1389]: calicf153f298bc: Gained IPv6LL Mar 12 01:36:38.390495 systemd-networkd[1389]: calif08faad35fc: Gained IPv6LL Mar 12 01:36:38.904307 containerd[1454]: time="2026-03-12T01:36:38.904164178Z" level=info msg="StopPodSandbox for \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\"" Mar 12 01:36:38.923402 systemd-networkd[1389]: cali1221108d75f: Gained IPv6LL Mar 12 01:36:39.037436 kubelet[2528]: I0312 01:36:39.037325 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7587d9cd96-xwzrx" podStartSLOduration=2.037260675 podStartE2EDuration="2.037260675s" podCreationTimestamp="2026-03-12 01:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:36:39.037180953 +0000 UTC m=+57.357026764" watchObservedRunningTime="2026-03-12 01:36:39.037260675 +0000 UTC m=+57.357106456" Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.000 [INFO][5512] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.000 [INFO][5512] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" iface="eth0" netns="/var/run/netns/cni-4c776783-e2d9-c0a1-7892-926977f57789" Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.001 [INFO][5512] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" iface="eth0" netns="/var/run/netns/cni-4c776783-e2d9-c0a1-7892-926977f57789" Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.001 [INFO][5512] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" iface="eth0" netns="/var/run/netns/cni-4c776783-e2d9-c0a1-7892-926977f57789" Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.001 [INFO][5512] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.001 [INFO][5512] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.095 [INFO][5521] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" HandleID="k8s-pod-network.de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.096 [INFO][5521] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.096 [INFO][5521] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.106 [WARNING][5521] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" HandleID="k8s-pod-network.de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.106 [INFO][5521] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" HandleID="k8s-pod-network.de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.109 [INFO][5521] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:39.116684 containerd[1454]: 2026-03-12 01:36:39.112 [INFO][5512] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:39.119431 containerd[1454]: time="2026-03-12T01:36:39.118238088Z" level=info msg="TearDown network for sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\" successfully" Mar 12 01:36:39.119431 containerd[1454]: time="2026-03-12T01:36:39.119397417Z" level=info msg="StopPodSandbox for \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\" returns successfully" Mar 12 01:36:39.121871 systemd[1]: run-netns-cni\x2d4c776783\x2de2d9\x2dc0a1\x2d7892\x2d926977f57789.mount: Deactivated successfully. Mar 12 01:36:39.123511 kubelet[2528]: E0312 01:36:39.123474 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:39.124323 containerd[1454]: time="2026-03-12T01:36:39.124196593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-977fw,Uid:87896f19-89d0-488e-a664-14de866626f3,Namespace:kube-system,Attempt:1,}" Mar 12 01:36:39.323644 systemd-networkd[1389]: cali14796ca2277: Link UP Mar 12 01:36:39.325744 systemd-networkd[1389]: cali14796ca2277: Gained carrier Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.221 [INFO][5535] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--977fw-eth0 coredns-66bc5c9577- kube-system 87896f19-89d0-488e-a664-14de866626f3 1071 0 2026-03-12 01:35:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-977fw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali14796ca2277 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" Namespace="kube-system" Pod="coredns-66bc5c9577-977fw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--977fw-" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.221 [INFO][5535] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" Namespace="kube-system" Pod="coredns-66bc5c9577-977fw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.255 [INFO][5560] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" HandleID="k8s-pod-network.447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.266 [INFO][5560] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" HandleID="k8s-pod-network.447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee1d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-977fw", "timestamp":"2026-03-12 01:36:39.255210988 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002aa000)} Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.266 [INFO][5560] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.266 [INFO][5560] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.267 [INFO][5560] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.271 [INFO][5560] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" host="localhost" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.282 [INFO][5560] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.291 [INFO][5560] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.294 [INFO][5560] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.297 [INFO][5560] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.298 [INFO][5560] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" host="localhost" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.300 [INFO][5560] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.306 [INFO][5560] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" host="localhost" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.315 [INFO][5560] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" host="localhost" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.315 [INFO][5560] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.137/26] handle="k8s-pod-network.447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" host="localhost" Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.316 [INFO][5560] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:39.347899 containerd[1454]: 2026-03-12 01:36:39.316 [INFO][5560] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" HandleID="k8s-pod-network.447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:39.348508 containerd[1454]: 2026-03-12 01:36:39.319 [INFO][5535] cni-plugin/k8s.go 418: Populated endpoint ContainerID="447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" Namespace="kube-system" Pod="coredns-66bc5c9577-977fw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--977fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--977fw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"87896f19-89d0-488e-a664-14de866626f3", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 35, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-977fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14796ca2277", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:39.348508 containerd[1454]: 2026-03-12 01:36:39.319 [INFO][5535] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" Namespace="kube-system" Pod="coredns-66bc5c9577-977fw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:39.348508 containerd[1454]: 2026-03-12 01:36:39.319 [INFO][5535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14796ca2277 
ContainerID="447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" Namespace="kube-system" Pod="coredns-66bc5c9577-977fw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:39.348508 containerd[1454]: 2026-03-12 01:36:39.324 [INFO][5535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" Namespace="kube-system" Pod="coredns-66bc5c9577-977fw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:39.348508 containerd[1454]: 2026-03-12 01:36:39.325 [INFO][5535] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" Namespace="kube-system" Pod="coredns-66bc5c9577-977fw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--977fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--977fw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"87896f19-89d0-488e-a664-14de866626f3", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 35, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a", Pod:"coredns-66bc5c9577-977fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14796ca2277", MAC:"fe:2c:7e:f6:33:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:39.348508 containerd[1454]: 2026-03-12 01:36:39.337 [INFO][5535] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a" Namespace="kube-system" Pod="coredns-66bc5c9577-977fw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:39.376902 containerd[1454]: time="2026-03-12T01:36:39.376759973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:36:39.376902 containerd[1454]: time="2026-03-12T01:36:39.376806669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:36:39.376902 containerd[1454]: time="2026-03-12T01:36:39.376816808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:39.377914 containerd[1454]: time="2026-03-12T01:36:39.377137650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:36:39.398525 systemd[1]: Started cri-containerd-447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a.scope - libcontainer container 447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a. Mar 12 01:36:39.412209 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:36:39.444195 containerd[1454]: time="2026-03-12T01:36:39.444112948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-977fw,Uid:87896f19-89d0-488e-a664-14de866626f3,Namespace:kube-system,Attempt:1,} returns sandbox id \"447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a\"" Mar 12 01:36:39.445552 kubelet[2528]: E0312 01:36:39.445086 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:39.454411 containerd[1454]: time="2026-03-12T01:36:39.454244362Z" level=info msg="CreateContainer within sandbox \"447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:36:39.478759 containerd[1454]: time="2026-03-12T01:36:39.478711439Z" level=info msg="CreateContainer within sandbox \"447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c4af52fa6502cccc6b7a3097a57a901476aad1259b669797c07782a6dbf0361\"" Mar 12 01:36:39.481168 containerd[1454]: time="2026-03-12T01:36:39.480630069Z" level=info msg="StartContainer for \"5c4af52fa6502cccc6b7a3097a57a901476aad1259b669797c07782a6dbf0361\"" Mar 12 01:36:39.524694 systemd[1]: Started cri-containerd-5c4af52fa6502cccc6b7a3097a57a901476aad1259b669797c07782a6dbf0361.scope - libcontainer container 5c4af52fa6502cccc6b7a3097a57a901476aad1259b669797c07782a6dbf0361. 
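[Annotation] The recurring dns.go:154 errors are kubelet noticing that the node's resolv.conf lists more nameservers than the libc resolver will use: glibc honors only the first three nameserver lines (MAXNS in <resolv.h>), so kubelet trims the set it propagates and logs the applied line, here "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that trimming (hypothetical helper, not kubelet's implementation; the fourth server 192.0.2.1 below is a made-up TEST-NET-1 stand-in for whatever pushed this node over the limit):

package main

import (
	"fmt"
	"strings"
)

// trimNameservers keeps the first three nameservers, mirroring the limit
// the dns.go:154 errors above describe.
func trimNameservers(resolvConf string) (applied, omitted []string) {
	const maxNS = 3 // glibc MAXNS
	var ns []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) <= maxNS {
		return ns, nil
	}
	return ns[:maxNS], ns[maxNS:]
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 192.0.2.1\n"
	applied, omitted := trimNameservers(conf)
	fmt.Println("applied:", strings.Join(applied, " ")) // applied: 1.1.1.1 1.0.0.1 8.8.8.8
	fmt.Println("omitted:", strings.Join(omitted, " ")) // omitted: 192.0.2.1
}
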
Mar 12 01:36:39.572423 containerd[1454]: time="2026-03-12T01:36:39.572361419Z" level=info msg="StartContainer for \"5c4af52fa6502cccc6b7a3097a57a901476aad1259b669797c07782a6dbf0361\" returns successfully" Mar 12 01:36:40.023514 kubelet[2528]: E0312 01:36:40.023436 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:40.033939 containerd[1454]: time="2026-03-12T01:36:40.032967175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 12 01:36:40.045942 containerd[1454]: time="2026-03-12T01:36:40.045462080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.282614304s" Mar 12 01:36:40.045942 containerd[1454]: time="2026-03-12T01:36:40.045534685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 12 01:36:40.047864 containerd[1454]: time="2026-03-12T01:36:40.047776279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:40.049375 containerd[1454]: time="2026-03-12T01:36:40.049210957Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:40.050650 containerd[1454]: time="2026-03-12T01:36:40.050591746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:40.075679 containerd[1454]: time="2026-03-12T01:36:40.075581101Z" level=info msg="CreateContainer within sandbox \"91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 12 01:36:40.101630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3784123684.mount: Deactivated successfully. Mar 12 01:36:40.102805 containerd[1454]: time="2026-03-12T01:36:40.102746186Z" level=info msg="CreateContainer within sandbox \"91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e7dc2a91f63c89097f6d8be75069e6ba821e8c0963c38a36eaa8be17e548149b\"" Mar 12 01:36:40.106295 containerd[1454]: time="2026-03-12T01:36:40.106193817Z" level=info msg="StartContainer for \"e7dc2a91f63c89097f6d8be75069e6ba821e8c0963c38a36eaa8be17e548149b\"" Mar 12 01:36:40.157636 systemd[1]: Started cri-containerd-e7dc2a91f63c89097f6d8be75069e6ba821e8c0963c38a36eaa8be17e548149b.scope - libcontainer container e7dc2a91f63c89097f6d8be75069e6ba821e8c0963c38a36eaa8be17e548149b. 
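[Annotation] The pod_startup_latency_tracker entries nearby decompose as follows: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same interval with image-pull time subtracted. That is why the two match exactly for pods whose firstStartedPulling/lastFinishedPulling are zero (calico-apiserver above, coredns) and diverge for calico-kube-controllers just below, whose pull shows up here as the ~3.28 s PullImage. Checking that reading against the kube-controllers entry's own timestamps (illustrative Go, assuming this decomposition):

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps from the calico-kube-controllers tracker entry below.
	firstPull := parse("2026-03-12 01:36:36.748646098 +0000 UTC")
	lastPull := parse("2026-03-12 01:36:40.051125801 +0000 UTC")
	pull := lastPull.Sub(firstPull) // 3.302479703s spent pulling

	e2e := 40043256690 * time.Nanosecond // podStartE2EDuration=40.04325669s
	// Prints ~36.740776987s, matching podStartSLOduration=36.740776997 up to
	// the precision lost in the truncated E2E string.
	fmt.Println(e2e - pull)
}
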
Mar 12 01:36:40.219976 containerd[1454]: time="2026-03-12T01:36:40.219469200Z" level=info msg="StartContainer for \"e7dc2a91f63c89097f6d8be75069e6ba821e8c0963c38a36eaa8be17e548149b\" returns successfully" Mar 12 01:36:40.694599 systemd-networkd[1389]: cali14796ca2277: Gained IPv6LL Mar 12 01:36:41.028882 kubelet[2528]: E0312 01:36:41.028569 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:41.043480 kubelet[2528]: I0312 01:36:41.043335 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cf4cfd8c5-kpffb" podStartSLOduration=36.740776997 podStartE2EDuration="40.04325669s" podCreationTimestamp="2026-03-12 01:36:01 +0000 UTC" firstStartedPulling="2026-03-12 01:36:36.748646098 +0000 UTC m=+55.068491879" lastFinishedPulling="2026-03-12 01:36:40.051125801 +0000 UTC m=+58.370971572" observedRunningTime="2026-03-12 01:36:41.041738232 +0000 UTC m=+59.361584003" watchObservedRunningTime="2026-03-12 01:36:41.04325669 +0000 UTC m=+59.363102461" Mar 12 01:36:41.044013 kubelet[2528]: I0312 01:36:41.043534 2528 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-977fw" podStartSLOduration=55.043524235 podStartE2EDuration="55.043524235s" podCreationTimestamp="2026-03-12 01:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:36:40.043149058 +0000 UTC m=+58.362994829" watchObservedRunningTime="2026-03-12 01:36:41.043524235 +0000 UTC m=+59.363370005" Mar 12 01:36:41.865735 containerd[1454]: time="2026-03-12T01:36:41.865675472Z" level=info msg="StopPodSandbox for \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\"" Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.935 [WARNING][5762] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8fe93df9-5176-42c7-b8e3-6176eea7ca40", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57", Pod:"goldmane-cccfbd5cf-jqpbf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibae3d8e948a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.936 [INFO][5762] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.936 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" iface="eth0" netns="" Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.936 [INFO][5762] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.936 [INFO][5762] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.972 [INFO][5772] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" HandleID="k8s-pod-network.fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.973 [INFO][5772] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.973 [INFO][5772] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.979 [WARNING][5772] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" HandleID="k8s-pod-network.fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.979 [INFO][5772] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" HandleID="k8s-pod-network.fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.981 [INFO][5772] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:41.988166 containerd[1454]: 2026-03-12 01:36:41.985 [INFO][5762] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:41.988166 containerd[1454]: time="2026-03-12T01:36:41.988085618Z" level=info msg="TearDown network for sandbox \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\" successfully" Mar 12 01:36:41.988166 containerd[1454]: time="2026-03-12T01:36:41.988118117Z" level=info msg="StopPodSandbox for \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\" returns successfully" Mar 12 01:36:41.989795 containerd[1454]: time="2026-03-12T01:36:41.989738912Z" level=info msg="RemovePodSandbox for \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\"" Mar 12 01:36:41.991582 containerd[1454]: time="2026-03-12T01:36:41.991537403Z" level=info msg="Forcibly stopping sandbox \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\"" Mar 12 01:36:42.030849 kubelet[2528]: E0312 01:36:42.030776 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.038 [WARNING][5789] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8fe93df9-5176-42c7-b8e3-6176eea7ca40", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22e2b072737c162658b9f4cc1db134e195370c4046fe102fa3028d88f443cd57", Pod:"goldmane-cccfbd5cf-jqpbf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibae3d8e948a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.038 [INFO][5789] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.038 [INFO][5789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" iface="eth0" netns="" Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.038 [INFO][5789] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.038 [INFO][5789] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.074 [INFO][5797] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" HandleID="k8s-pod-network.fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.074 [INFO][5797] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.074 [INFO][5797] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.081 [WARNING][5797] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" HandleID="k8s-pod-network.fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.082 [INFO][5797] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" HandleID="k8s-pod-network.fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Workload="localhost-k8s-goldmane--cccfbd5cf--jqpbf-eth0" Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.084 [INFO][5797] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:42.090362 containerd[1454]: 2026-03-12 01:36:42.087 [INFO][5789] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62" Mar 12 01:36:42.091036 containerd[1454]: time="2026-03-12T01:36:42.090411473Z" level=info msg="TearDown network for sandbox \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\" successfully" Mar 12 01:36:42.109843 containerd[1454]: time="2026-03-12T01:36:42.109735995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:36:42.109960 containerd[1454]: time="2026-03-12T01:36:42.109862148Z" level=info msg="RemovePodSandbox \"fb1cd7440f10148d769f7cfb32edd3bae5a0dd31751fd65fcda0a737be129b62\" returns successfully" Mar 12 01:36:42.110752 containerd[1454]: time="2026-03-12T01:36:42.110702299Z" level=info msg="StopPodSandbox for \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\"" Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.170 [WARNING][5814] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--977fw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"87896f19-89d0-488e-a664-14de866626f3", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 35, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a", Pod:"coredns-66bc5c9577-977fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14796ca2277", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.170 [INFO][5814] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.170 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" iface="eth0" netns="" Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.170 [INFO][5814] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.171 [INFO][5814] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.218 [INFO][5822] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" HandleID="k8s-pod-network.de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.219 [INFO][5822] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.219 [INFO][5822] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.226 [WARNING][5822] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" HandleID="k8s-pod-network.de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.226 [INFO][5822] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" HandleID="k8s-pod-network.de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.229 [INFO][5822] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:42.235810 containerd[1454]: 2026-03-12 01:36:42.232 [INFO][5814] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:42.235810 containerd[1454]: time="2026-03-12T01:36:42.235510132Z" level=info msg="TearDown network for sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\" successfully" Mar 12 01:36:42.235810 containerd[1454]: time="2026-03-12T01:36:42.235531912Z" level=info msg="StopPodSandbox for \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\" returns successfully" Mar 12 01:36:42.236712 containerd[1454]: time="2026-03-12T01:36:42.236233102Z" level=info msg="RemovePodSandbox for \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\"" Mar 12 01:36:42.236712 containerd[1454]: time="2026-03-12T01:36:42.236335361Z" level=info msg="Forcibly stopping sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\"" Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.305 [WARNING][5839] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--977fw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"87896f19-89d0-488e-a664-14de866626f3", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 35, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"447057b40b704b8538d3d79fcba6a830ea9de45c4e2c071b9fde0880b7584b2a", Pod:"coredns-66bc5c9577-977fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14796ca2277", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.305 [INFO][5839] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.305 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" iface="eth0" netns="" Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.305 [INFO][5839] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.305 [INFO][5839] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.341 [INFO][5852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" HandleID="k8s-pod-network.de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.341 [INFO][5852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.341 [INFO][5852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.349 [WARNING][5852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" HandleID="k8s-pod-network.de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.349 [INFO][5852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" HandleID="k8s-pod-network.de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Workload="localhost-k8s-coredns--66bc5c9577--977fw-eth0" Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.351 [INFO][5852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:42.360902 containerd[1454]: 2026-03-12 01:36:42.355 [INFO][5839] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6" Mar 12 01:36:42.360902 containerd[1454]: time="2026-03-12T01:36:42.360755199Z" level=info msg="TearDown network for sandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\" successfully" Mar 12 01:36:42.419202 containerd[1454]: time="2026-03-12T01:36:42.419131728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:36:42.419396 containerd[1454]: time="2026-03-12T01:36:42.419246930Z" level=info msg="RemovePodSandbox \"de0827ccff64f5a0635a10a793824256b90e1da4e059d6d90b918cc08ec01bb6\" returns successfully" Mar 12 01:36:42.420101 containerd[1454]: time="2026-03-12T01:36:42.420056726Z" level=info msg="StopPodSandbox for \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\"" Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.490 [WARNING][5869] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bc54q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"503f8cf5-b92a-411c-8353-481b71d6c97f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 35, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e", Pod:"coredns-66bc5c9577-bc54q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fdfb5df2e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.490 [INFO][5869] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.490 [INFO][5869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" iface="eth0" netns="" Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.490 [INFO][5869] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.490 [INFO][5869] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.525 [INFO][5877] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" HandleID="k8s-pod-network.b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.525 [INFO][5877] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.525 [INFO][5877] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.533 [WARNING][5877] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" HandleID="k8s-pod-network.b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.533 [INFO][5877] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" HandleID="k8s-pod-network.b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.536 [INFO][5877] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:42.542779 containerd[1454]: 2026-03-12 01:36:42.539 [INFO][5869] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:42.543778 containerd[1454]: time="2026-03-12T01:36:42.542831713Z" level=info msg="TearDown network for sandbox \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\" successfully" Mar 12 01:36:42.543778 containerd[1454]: time="2026-03-12T01:36:42.542862730Z" level=info msg="StopPodSandbox for \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\" returns successfully" Mar 12 01:36:42.543778 containerd[1454]: time="2026-03-12T01:36:42.543630199Z" level=info msg="RemovePodSandbox for \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\"" Mar 12 01:36:42.543778 containerd[1454]: time="2026-03-12T01:36:42.543666525Z" level=info msg="Forcibly stopping sandbox \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\"" Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.608 [WARNING][5894] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bc54q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"503f8cf5-b92a-411c-8353-481b71d6c97f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 35, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c667132012003111e58065fe5bd1d8f5a7e45cda277951a6bcb8d0fe4ebc79e", Pod:"coredns-66bc5c9577-bc54q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fdfb5df2e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.609 [INFO][5894] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.609 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" iface="eth0" netns="" Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.609 [INFO][5894] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.609 [INFO][5894] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.642 [INFO][5903] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" HandleID="k8s-pod-network.b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.642 [INFO][5903] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.642 [INFO][5903] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.649 [WARNING][5903] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" HandleID="k8s-pod-network.b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.649 [INFO][5903] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" HandleID="k8s-pod-network.b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Workload="localhost-k8s-coredns--66bc5c9577--bc54q-eth0" Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.652 [INFO][5903] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:42.661666 containerd[1454]: 2026-03-12 01:36:42.655 [INFO][5894] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640" Mar 12 01:36:42.670900 containerd[1454]: time="2026-03-12T01:36:42.662807017Z" level=info msg="TearDown network for sandbox \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\" successfully" Mar 12 01:36:42.680396 containerd[1454]: time="2026-03-12T01:36:42.680253199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 01:36:42.680508 containerd[1454]: time="2026-03-12T01:36:42.680423323Z" level=info msg="RemovePodSandbox \"b3dfbc640779c49c5300f08225213822843925be5961af2d0163c0588b789640\" returns successfully" Mar 12 01:36:42.681352 containerd[1454]: time="2026-03-12T01:36:42.681309578Z" level=info msg="StopPodSandbox for \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\"" Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.736 [WARNING][5920] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.737 [INFO][5920] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.737 [INFO][5920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" iface="eth0" netns="" Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.737 [INFO][5920] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.737 [INFO][5920] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.782 [INFO][5928] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.782 [INFO][5928] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.782 [INFO][5928] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.790 [WARNING][5928] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.790 [INFO][5928] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.793 [INFO][5928] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:42.799377 containerd[1454]: 2026-03-12 01:36:42.796 [INFO][5920] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:42.799377 containerd[1454]: time="2026-03-12T01:36:42.799319475Z" level=info msg="TearDown network for sandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\" successfully" Mar 12 01:36:42.799377 containerd[1454]: time="2026-03-12T01:36:42.799361252Z" level=info msg="StopPodSandbox for \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\" returns successfully" Mar 12 01:36:42.800598 containerd[1454]: time="2026-03-12T01:36:42.800512939Z" level=info msg="RemovePodSandbox for \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\"" Mar 12 01:36:42.800598 containerd[1454]: time="2026-03-12T01:36:42.800555738Z" level=info msg="Forcibly stopping sandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\"" Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.847 [WARNING][5945] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.847 [INFO][5945] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.847 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" iface="eth0" netns="" Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.847 [INFO][5945] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.847 [INFO][5945] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.888 [INFO][5954] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.888 [INFO][5954] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.888 [INFO][5954] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.900 [WARNING][5954] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.900 [INFO][5954] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" HandleID="k8s-pod-network.5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.903 [INFO][5954] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:42.910778 containerd[1454]: 2026-03-12 01:36:42.906 [INFO][5945] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d" Mar 12 01:36:42.911875 containerd[1454]: time="2026-03-12T01:36:42.910822375Z" level=info msg="TearDown network for sandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\" successfully" Mar 12 01:36:42.916546 containerd[1454]: time="2026-03-12T01:36:42.916478816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:36:42.916634 containerd[1454]: time="2026-03-12T01:36:42.916561909Z" level=info msg="RemovePodSandbox \"5f65fd75f214338d7f890656d31e9c3adf22d6c5fade76ffb4c22b9637852d7d\" returns successfully" Mar 12 01:36:42.917353 containerd[1454]: time="2026-03-12T01:36:42.917252915Z" level=info msg="StopPodSandbox for \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\"" Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:42.974 [WARNING][5972] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:42.974 [INFO][5972] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:42.974 [INFO][5972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" iface="eth0" netns="" Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:42.974 [INFO][5972] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:42.974 [INFO][5972] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:43.001 [INFO][5981] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:43.002 [INFO][5981] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:43.002 [INFO][5981] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:43.009 [WARNING][5981] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:43.009 [INFO][5981] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:43.011 [INFO][5981] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:43.017869 containerd[1454]: 2026-03-12 01:36:43.014 [INFO][5972] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:43.017869 containerd[1454]: time="2026-03-12T01:36:43.017836163Z" level=info msg="TearDown network for sandbox \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\" successfully" Mar 12 01:36:43.017869 containerd[1454]: time="2026-03-12T01:36:43.017858665Z" level=info msg="StopPodSandbox for \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\" returns successfully" Mar 12 01:36:43.018685 containerd[1454]: time="2026-03-12T01:36:43.018642799Z" level=info msg="RemovePodSandbox for \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\"" Mar 12 01:36:43.018804 containerd[1454]: time="2026-03-12T01:36:43.018694134Z" level=info msg="Forcibly stopping sandbox \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\"" Mar 12 01:36:43.035354 kubelet[2528]: E0312 01:36:43.035241 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.080 [WARNING][5998] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" WorkloadEndpoint="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.080 [INFO][5998] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.080 [INFO][5998] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" iface="eth0" netns="" Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.080 [INFO][5998] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.080 [INFO][5998] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.115 [INFO][6007] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.115 [INFO][6007] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.115 [INFO][6007] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.123 [WARNING][6007] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.123 [INFO][6007] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" HandleID="k8s-pod-network.592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Workload="localhost-k8s-whisker--58846f54f5--7rsmh-eth0" Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.127 [INFO][6007] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:43.133974 containerd[1454]: 2026-03-12 01:36:43.130 [INFO][5998] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f" Mar 12 01:36:43.133974 containerd[1454]: time="2026-03-12T01:36:43.133331140Z" level=info msg="TearDown network for sandbox \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\" successfully" Mar 12 01:36:43.140410 containerd[1454]: time="2026-03-12T01:36:43.140360462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:36:43.140506 containerd[1454]: time="2026-03-12T01:36:43.140456960Z" level=info msg="RemovePodSandbox \"592ccc44d20e41ef45f3e58b44a1a555ed0c7d564934eb99b09ceb329ce84d6f\" returns successfully" Mar 12 01:36:43.141130 containerd[1454]: time="2026-03-12T01:36:43.141091027Z" level=info msg="StopPodSandbox for \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\"" Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.202 [WARNING][6024] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0", GenerateName:"calico-kube-controllers-7cf4cfd8c5-", Namespace:"calico-system", SelfLink:"", UID:"dd73580a-b13b-41aa-8b3e-da326a7dc9c7", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf4cfd8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e", Pod:"calico-kube-controllers-7cf4cfd8c5-kpffb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf153f298bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.202 [INFO][6024] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.202 [INFO][6024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" iface="eth0" netns="" Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.202 [INFO][6024] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.202 [INFO][6024] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.235 [INFO][6032] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" HandleID="k8s-pod-network.8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.235 [INFO][6032] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.235 [INFO][6032] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.243 [WARNING][6032] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" HandleID="k8s-pod-network.8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.243 [INFO][6032] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" HandleID="k8s-pod-network.8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.254 [INFO][6032] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:43.268111 containerd[1454]: 2026-03-12 01:36:43.257 [INFO][6024] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:43.268111 containerd[1454]: time="2026-03-12T01:36:43.268099693Z" level=info msg="TearDown network for sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\" successfully" Mar 12 01:36:43.268863 containerd[1454]: time="2026-03-12T01:36:43.268132754Z" level=info msg="StopPodSandbox for \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\" returns successfully" Mar 12 01:36:43.269167 containerd[1454]: time="2026-03-12T01:36:43.269109741Z" level=info msg="RemovePodSandbox for \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\"" Mar 12 01:36:43.269372 containerd[1454]: time="2026-03-12T01:36:43.269174080Z" level=info msg="Forcibly stopping sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\"" Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.336 [WARNING][6049] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0", GenerateName:"calico-kube-controllers-7cf4cfd8c5-", Namespace:"calico-system", SelfLink:"", UID:"dd73580a-b13b-41aa-8b3e-da326a7dc9c7", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf4cfd8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91fa645168cb0ee2a145b3a4044ea097e12f03d2ee22bf4f7ef26256923a9f4e", Pod:"calico-kube-controllers-7cf4cfd8c5-kpffb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf153f298bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.336 [INFO][6049] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.336 [INFO][6049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" iface="eth0" netns="" Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.336 [INFO][6049] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.336 [INFO][6049] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.382 [INFO][6057] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" HandleID="k8s-pod-network.8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.383 [INFO][6057] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.383 [INFO][6057] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.390 [WARNING][6057] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" HandleID="k8s-pod-network.8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.390 [INFO][6057] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" HandleID="k8s-pod-network.8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Workload="localhost-k8s-calico--kube--controllers--7cf4cfd8c5--kpffb-eth0" Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.392 [INFO][6057] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:43.399457 containerd[1454]: 2026-03-12 01:36:43.395 [INFO][6049] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6" Mar 12 01:36:43.399457 containerd[1454]: time="2026-03-12T01:36:43.398813598Z" level=info msg="TearDown network for sandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\" successfully" Mar 12 01:36:43.405429 containerd[1454]: time="2026-03-12T01:36:43.405335538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:36:43.405429 containerd[1454]: time="2026-03-12T01:36:43.405422688Z" level=info msg="RemovePodSandbox \"8d78db05ec9ab975bd1a2b2fbce9b5c71193dc27e3fe0ff8d091f3d0ed144ae6\" returns successfully" Mar 12 01:36:43.406413 containerd[1454]: time="2026-03-12T01:36:43.406374144Z" level=info msg="StopPodSandbox for \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\"" Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.458 [WARNING][6074] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fz78r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e41b2407-7aa6-4ede-8904-d6670e550c53", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0", Pod:"csi-node-driver-fz78r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ccbd44d559", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.458 [INFO][6074] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.458 [INFO][6074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" iface="eth0" netns="" Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.459 [INFO][6074] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.459 [INFO][6074] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.494 [INFO][6082] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" HandleID="k8s-pod-network.0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.494 [INFO][6082] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.494 [INFO][6082] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.501 [WARNING][6082] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" HandleID="k8s-pod-network.0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.502 [INFO][6082] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" HandleID="k8s-pod-network.0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.504 [INFO][6082] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:43.510140 containerd[1454]: 2026-03-12 01:36:43.507 [INFO][6074] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:43.510662 containerd[1454]: time="2026-03-12T01:36:43.510170270Z" level=info msg="TearDown network for sandbox \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\" successfully" Mar 12 01:36:43.510662 containerd[1454]: time="2026-03-12T01:36:43.510194906Z" level=info msg="StopPodSandbox for \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\" returns successfully" Mar 12 01:36:43.510799 containerd[1454]: time="2026-03-12T01:36:43.510762637Z" level=info msg="RemovePodSandbox for \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\"" Mar 12 01:36:43.510836 containerd[1454]: time="2026-03-12T01:36:43.510807870Z" level=info msg="Forcibly stopping sandbox \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\"" Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.557 [WARNING][6099] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fz78r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e41b2407-7aa6-4ede-8904-d6670e550c53", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"530f97b39787a0f72b711287f3ccfcf4d2c21bec69731ed9472b93d6cd8a35a0", Pod:"csi-node-driver-fz78r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ccbd44d559", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.557 [INFO][6099] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.557 [INFO][6099] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" iface="eth0" netns="" Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.558 [INFO][6099] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.558 [INFO][6099] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.594 [INFO][6107] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" HandleID="k8s-pod-network.0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.594 [INFO][6107] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.594 [INFO][6107] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.603 [WARNING][6107] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" HandleID="k8s-pod-network.0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.604 [INFO][6107] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" HandleID="k8s-pod-network.0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Workload="localhost-k8s-csi--node--driver--fz78r-eth0" Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.606 [INFO][6107] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:43.612954 containerd[1454]: 2026-03-12 01:36:43.609 [INFO][6099] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8" Mar 12 01:36:43.613613 containerd[1454]: time="2026-03-12T01:36:43.613037614Z" level=info msg="TearDown network for sandbox \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\" successfully" Mar 12 01:36:43.618534 containerd[1454]: time="2026-03-12T01:36:43.618422560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:36:43.618772 containerd[1454]: time="2026-03-12T01:36:43.618511504Z" level=info msg="RemovePodSandbox \"0d44238109b6659d2dfeaefc535a9a18f3ecd064866ff982bce1a51eff8076a8\" returns successfully" Mar 12 01:36:43.619443 containerd[1454]: time="2026-03-12T01:36:43.619398559Z" level=info msg="StopPodSandbox for \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\"" Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.678 [WARNING][6125] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0", GenerateName:"calico-apiserver-5fcfc6547b-", Namespace:"calico-system", SelfLink:"", UID:"cc45d57a-2d65-4471-a944-16cc99da2325", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcfc6547b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48", Pod:"calico-apiserver-5fcfc6547b-f9sbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif08faad35fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.679 [INFO][6125] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.679 [INFO][6125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" iface="eth0" netns="" Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.679 [INFO][6125] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.679 [INFO][6125] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.713 [INFO][6133] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" HandleID="k8s-pod-network.3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.713 [INFO][6133] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.713 [INFO][6133] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.725 [WARNING][6133] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" HandleID="k8s-pod-network.3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.725 [INFO][6133] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" HandleID="k8s-pod-network.3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.727 [INFO][6133] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:43.734759 containerd[1454]: 2026-03-12 01:36:43.731 [INFO][6125] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:43.734759 containerd[1454]: time="2026-03-12T01:36:43.734669596Z" level=info msg="TearDown network for sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\" successfully" Mar 12 01:36:43.734759 containerd[1454]: time="2026-03-12T01:36:43.734704391Z" level=info msg="StopPodSandbox for \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\" returns successfully" Mar 12 01:36:43.736201 containerd[1454]: time="2026-03-12T01:36:43.735568347Z" level=info msg="RemovePodSandbox for \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\"" Mar 12 01:36:43.736201 containerd[1454]: time="2026-03-12T01:36:43.735604624Z" level=info msg="Forcibly stopping sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\"" Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.789 [WARNING][6150] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0", GenerateName:"calico-apiserver-5fcfc6547b-", Namespace:"calico-system", SelfLink:"", UID:"cc45d57a-2d65-4471-a944-16cc99da2325", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcfc6547b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59341accd06ba9c99c42a5d457a67fff505d32fcc60ef1c1472d800926535e48", Pod:"calico-apiserver-5fcfc6547b-f9sbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif08faad35fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.790 [INFO][6150] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.790 [INFO][6150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" iface="eth0" netns="" Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.790 [INFO][6150] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.790 [INFO][6150] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.821 [INFO][6158] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" HandleID="k8s-pod-network.3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.821 [INFO][6158] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.821 [INFO][6158] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.828 [WARNING][6158] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" HandleID="k8s-pod-network.3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.828 [INFO][6158] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" HandleID="k8s-pod-network.3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--f9sbm-eth0" Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.830 [INFO][6158] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:43.836617 containerd[1454]: 2026-03-12 01:36:43.833 [INFO][6150] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765" Mar 12 01:36:43.837600 containerd[1454]: time="2026-03-12T01:36:43.836640623Z" level=info msg="TearDown network for sandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\" successfully" Mar 12 01:36:43.856546 containerd[1454]: time="2026-03-12T01:36:43.856421668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:36:43.856684 containerd[1454]: time="2026-03-12T01:36:43.856573979Z" level=info msg="RemovePodSandbox \"3df687bb319a19ebcc898bdb84509ef8faac8c37a68e6f1dba7594bbe40b9765\" returns successfully" Mar 12 01:36:43.857615 containerd[1454]: time="2026-03-12T01:36:43.857465547Z" level=info msg="StopPodSandbox for \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\"" Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.903 [WARNING][6175] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0", GenerateName:"calico-apiserver-5fcfc6547b-", Namespace:"calico-system", SelfLink:"", UID:"f00bf4e6-320f-408f-93a5-5bdfb046e6a2", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcfc6547b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346", Pod:"calico-apiserver-5fcfc6547b-67zph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3ae3384e84d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.903 [INFO][6175] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.903 [INFO][6175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" iface="eth0" netns="" Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.904 [INFO][6175] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.904 [INFO][6175] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.933 [INFO][6183] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" HandleID="k8s-pod-network.f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.933 [INFO][6183] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.933 [INFO][6183] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.942 [WARNING][6183] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" HandleID="k8s-pod-network.f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.942 [INFO][6183] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" HandleID="k8s-pod-network.f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.944 [INFO][6183] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:43.952073 containerd[1454]: 2026-03-12 01:36:43.948 [INFO][6175] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:43.952073 containerd[1454]: time="2026-03-12T01:36:43.952054608Z" level=info msg="TearDown network for sandbox \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\" successfully" Mar 12 01:36:43.953138 containerd[1454]: time="2026-03-12T01:36:43.952086366Z" level=info msg="StopPodSandbox for \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\" returns successfully" Mar 12 01:36:43.953138 containerd[1454]: time="2026-03-12T01:36:43.952884805Z" level=info msg="RemovePodSandbox for \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\"" Mar 12 01:36:43.953138 containerd[1454]: time="2026-03-12T01:36:43.952933296Z" level=info msg="Forcibly stopping sandbox \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\"" Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.010 [WARNING][6200] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0", GenerateName:"calico-apiserver-5fcfc6547b-", Namespace:"calico-system", SelfLink:"", UID:"f00bf4e6-320f-408f-93a5-5bdfb046e6a2", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcfc6547b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06d0310f452d54a9fdd1942c4cd9b0f904235f4a8be7077770ec7256a59cf346", Pod:"calico-apiserver-5fcfc6547b-67zph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3ae3384e84d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.010 [INFO][6200] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.010 [INFO][6200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" iface="eth0" netns="" Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.010 [INFO][6200] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.010 [INFO][6200] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.048 [INFO][6208] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" HandleID="k8s-pod-network.f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.049 [INFO][6208] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.050 [INFO][6208] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.059 [WARNING][6208] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" HandleID="k8s-pod-network.f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.059 [INFO][6208] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" HandleID="k8s-pod-network.f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Workload="localhost-k8s-calico--apiserver--5fcfc6547b--67zph-eth0" Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.061 [INFO][6208] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:36:44.067440 containerd[1454]: 2026-03-12 01:36:44.064 [INFO][6200] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528" Mar 12 01:36:44.068573 containerd[1454]: time="2026-03-12T01:36:44.067448357Z" level=info msg="TearDown network for sandbox \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\" successfully" Mar 12 01:36:44.071994 containerd[1454]: time="2026-03-12T01:36:44.071928296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:36:44.072039 containerd[1454]: time="2026-03-12T01:36:44.072008074Z" level=info msg="RemovePodSandbox \"f1b8aeebbead1ba57c94a6c124e3eec467e0f5cde492582bce733c53c256b528\" returns successfully" Mar 12 01:36:46.006783 systemd[1]: Started sshd@7-10.0.0.111:22-10.0.0.1:54334.service - OpenSSH per-connection server daemon (10.0.0.1:54334). Mar 12 01:36:46.058240 sshd[6233]: Accepted publickey for core from 10.0.0.1 port 54334 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:36:46.060132 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:36:46.065463 systemd-logind[1441]: New session 8 of user core. Mar 12 01:36:46.072508 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 12 01:36:46.466861 sshd[6233]: pam_unix(sshd:session): session closed for user core Mar 12 01:36:46.473189 systemd[1]: sshd@7-10.0.0.111:22-10.0.0.1:54334.service: Deactivated successfully. Mar 12 01:36:46.476550 systemd[1]: session-8.scope: Deactivated successfully. Mar 12 01:36:46.477894 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Mar 12 01:36:46.479876 systemd-logind[1441]: Removed session 8. Mar 12 01:36:47.348509 kubelet[2528]: I0312 01:36:47.348423 2528 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:36:51.491789 systemd[1]: Started sshd@8-10.0.0.111:22-10.0.0.1:37716.service - OpenSSH per-connection server daemon (10.0.0.1:37716). Mar 12 01:36:51.561078 sshd[6274]: Accepted publickey for core from 10.0.0.1 port 37716 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:36:51.563749 sshd[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:36:51.572579 systemd-logind[1441]: New session 9 of user core. Mar 12 01:36:51.580632 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 12 01:36:51.765953 sshd[6274]: pam_unix(sshd:session): session closed for user core Mar 12 01:36:51.774039 systemd[1]: sshd@8-10.0.0.111:22-10.0.0.1:37716.service: Deactivated successfully. Mar 12 01:36:51.779450 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 01:36:51.781255 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Mar 12 01:36:51.782877 systemd-logind[1441]: Removed session 9. Mar 12 01:36:56.821939 systemd[1]: Started sshd@9-10.0.0.111:22-10.0.0.1:37718.service - OpenSSH per-connection server daemon (10.0.0.1:37718). Mar 12 01:36:57.007560 sshd[6360]: Accepted publickey for core from 10.0.0.1 port 37718 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:36:57.010484 sshd[6360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:36:57.031758 systemd-logind[1441]: New session 10 of user core. Mar 12 01:36:57.041537 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 12 01:36:57.380386 sshd[6360]: pam_unix(sshd:session): session closed for user core Mar 12 01:36:57.388353 systemd[1]: sshd@9-10.0.0.111:22-10.0.0.1:37718.service: Deactivated successfully. Mar 12 01:36:57.395235 systemd[1]: session-10.scope: Deactivated successfully. Mar 12 01:36:57.398237 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Mar 12 01:36:57.403203 systemd-logind[1441]: Removed session 10. Mar 12 01:37:02.401495 systemd[1]: Started sshd@10-10.0.0.111:22-10.0.0.1:40732.service - OpenSSH per-connection server daemon (10.0.0.1:40732). Mar 12 01:37:02.540970 sshd[6381]: Accepted publickey for core from 10.0.0.1 port 40732 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:02.546647 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:02.747195 systemd-logind[1441]: New session 11 of user core. Mar 12 01:37:02.774033 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 12 01:37:03.177602 sshd[6381]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:03.191508 systemd[1]: sshd@10-10.0.0.111:22-10.0.0.1:40732.service: Deactivated successfully. Mar 12 01:37:03.196027 systemd[1]: session-11.scope: Deactivated successfully. Mar 12 01:37:03.198334 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Mar 12 01:37:03.201129 systemd-logind[1441]: Removed session 11. Mar 12 01:37:06.896907 kubelet[2528]: E0312 01:37:06.896826 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:08.202244 systemd[1]: Started sshd@11-10.0.0.111:22-10.0.0.1:40746.service - OpenSSH per-connection server daemon (10.0.0.1:40746). Mar 12 01:37:08.306394 sshd[6425]: Accepted publickey for core from 10.0.0.1 port 40746 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:08.311041 sshd[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:08.320331 systemd-logind[1441]: New session 12 of user core. Mar 12 01:37:08.338567 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 12 01:37:08.518528 sshd[6425]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:08.524548 systemd[1]: sshd@11-10.0.0.111:22-10.0.0.1:40746.service: Deactivated successfully. Mar 12 01:37:08.527640 systemd[1]: session-12.scope: Deactivated successfully. 
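Each "Accepted publickey" record above carries the client key's fingerprint in OpenSSH's SHA256 format: an unpadded base64 encoding of SHA-256 over the key's wire-format blob. A self-contained Go check using golang.org/x/crypto/ssh — it generates a throwaway key so it runs as-is; in practice you would parse an authorized_keys entry instead:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Throwaway key so the example is self-contained.
	pubRaw, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	pub, err := ssh.NewPublicKey(pubRaw)
	if err != nil {
		log.Fatal(err)
	}
	// FingerprintSHA256 produces the same "SHA256:<base64>" string that
	// sshd prints in its "Accepted publickey" records.
	fmt.Println(ssh.FingerprintSHA256(pub))
}
```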
Mar 12 01:37:08.528721 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. Mar 12 01:37:08.530574 systemd-logind[1441]: Removed session 12. Mar 12 01:37:13.530868 systemd[1]: Started sshd@12-10.0.0.111:22-10.0.0.1:42870.service - OpenSSH per-connection server daemon (10.0.0.1:42870). Mar 12 01:37:13.574446 sshd[6472]: Accepted publickey for core from 10.0.0.1 port 42870 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:13.576807 sshd[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:13.582697 systemd-logind[1441]: New session 13 of user core. Mar 12 01:37:13.591465 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 12 01:37:13.745723 sshd[6472]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:13.748684 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Mar 12 01:37:13.750192 systemd[1]: sshd@12-10.0.0.111:22-10.0.0.1:42870.service: Deactivated successfully. Mar 12 01:37:13.754017 systemd[1]: session-13.scope: Deactivated successfully. Mar 12 01:37:13.755821 systemd-logind[1441]: Removed session 13. Mar 12 01:37:13.897815 kubelet[2528]: E0312 01:37:13.897735 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:16.896161 kubelet[2528]: E0312 01:37:16.896038 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:18.760916 systemd[1]: Started sshd@13-10.0.0.111:22-10.0.0.1:42880.service - OpenSSH per-connection server daemon (10.0.0.1:42880). Mar 12 01:37:18.843116 sshd[6509]: Accepted publickey for core from 10.0.0.1 port 42880 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:18.845773 sshd[6509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:18.853407 systemd-logind[1441]: New session 14 of user core. Mar 12 01:37:18.860529 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 12 01:37:18.896571 kubelet[2528]: E0312 01:37:18.896485 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:19.045886 sshd[6509]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:19.058971 systemd[1]: sshd@13-10.0.0.111:22-10.0.0.1:42880.service: Deactivated successfully. Mar 12 01:37:19.060992 systemd[1]: session-14.scope: Deactivated successfully. Mar 12 01:37:19.062858 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. Mar 12 01:37:19.074475 systemd[1]: Started sshd@14-10.0.0.111:22-10.0.0.1:42884.service - OpenSSH per-connection server daemon (10.0.0.1:42884). Mar 12 01:37:19.076640 systemd-logind[1441]: Removed session 14. Mar 12 01:37:19.115552 sshd[6524]: Accepted publickey for core from 10.0.0.1 port 42884 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:19.117952 sshd[6524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:19.123936 systemd-logind[1441]: New session 15 of user core. Mar 12 01:37:19.135548 systemd[1]: Started session-15.scope - Session 15 of User core. 
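The recurring kubelet error "Nameserver limits exceeded" means the node's resolv.conf lists more nameservers than the kubelet will propagate into a pod's resolv.conf; it keeps the first few and logs the line it actually applied, here "1.1.1.1 1.0.0.1 8.8.8.8". The cap matches the classic glibc resolver limit of three. A hedged sketch of that truncation logic (not the kubelet's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic glibc resolver cap that the kubelet
// also enforces when building a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// The situation the kubelet logs as "Nameserver limits exceeded":
		// keep the first three, drop the rest.
		fmt.Printf("nameserver limit exceeded, applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	fmt.Println("nameservers:", servers)
}
```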
Mar 12 01:37:19.337335 sshd[6524]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:19.348568 systemd[1]: sshd@14-10.0.0.111:22-10.0.0.1:42884.service: Deactivated successfully. Mar 12 01:37:19.350956 systemd[1]: session-15.scope: Deactivated successfully. Mar 12 01:37:19.355535 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit. Mar 12 01:37:19.367747 systemd[1]: Started sshd@15-10.0.0.111:22-10.0.0.1:42898.service - OpenSSH per-connection server daemon (10.0.0.1:42898). Mar 12 01:37:19.369939 systemd-logind[1441]: Removed session 15. Mar 12 01:37:19.411542 sshd[6536]: Accepted publickey for core from 10.0.0.1 port 42898 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:19.414254 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:19.420839 systemd-logind[1441]: New session 16 of user core. Mar 12 01:37:19.426480 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 12 01:37:19.579752 sshd[6536]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:19.586409 systemd[1]: sshd@15-10.0.0.111:22-10.0.0.1:42898.service: Deactivated successfully. Mar 12 01:37:19.595782 systemd[1]: session-16.scope: Deactivated successfully. Mar 12 01:37:19.597555 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit. Mar 12 01:37:19.598984 systemd-logind[1441]: Removed session 16. Mar 12 01:37:21.325185 kubelet[2528]: I0312 01:37:21.325093 2528 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:37:24.592705 systemd[1]: Started sshd@16-10.0.0.111:22-10.0.0.1:43754.service - OpenSSH per-connection server daemon (10.0.0.1:43754). Mar 12 01:37:24.631409 sshd[6559]: Accepted publickey for core from 10.0.0.1 port 43754 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:24.633709 sshd[6559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:24.640227 systemd-logind[1441]: New session 17 of user core. Mar 12 01:37:24.647551 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 12 01:37:24.788687 sshd[6559]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:24.792065 systemd[1]: sshd@16-10.0.0.111:22-10.0.0.1:43754.service: Deactivated successfully. Mar 12 01:37:24.794818 systemd[1]: session-17.scope: Deactivated successfully. Mar 12 01:37:24.797043 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit. Mar 12 01:37:24.798460 systemd-logind[1441]: Removed session 17. Mar 12 01:37:29.802611 systemd[1]: Started sshd@17-10.0.0.111:22-10.0.0.1:43768.service - OpenSSH per-connection server daemon (10.0.0.1:43768). Mar 12 01:37:29.865997 sshd[6597]: Accepted publickey for core from 10.0.0.1 port 43768 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:29.868032 sshd[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:29.873699 systemd-logind[1441]: New session 18 of user core. Mar 12 01:37:29.883510 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 12 01:37:30.039371 sshd[6597]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:30.053083 systemd[1]: sshd@17-10.0.0.111:22-10.0.0.1:43768.service: Deactivated successfully. Mar 12 01:37:30.056097 systemd[1]: session-18.scope: Deactivated successfully. Mar 12 01:37:30.058399 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit. 
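The kubelet lines 'Failed to trigger a manual run' probe="Readiness" come from its prober manager: when something requests an immediate readiness re-check, the manager looks up the probe worker for that container and, if none is registered or its trigger channel is already full, logs this warning instead of probing. A minimal sketch of that non-blocking trigger pattern under those assumptions — hypothetical types, not the kubelet's implementation:

```go
package main

import "fmt"

// worker is a hypothetical per-container probe worker; manualTrigger is
// a buffered channel the manager pokes to request an immediate probe.
type worker struct {
	manualTrigger chan struct{}
}

type manager struct {
	workers map[string]*worker // key: namespace/pod/container
}

// TriggerManualRun asks the worker for key to probe now. The send is
// non-blocking: if no worker exists or its channel is full, we only warn,
// mirroring "Failed to trigger a manual run" in the kubelet log.
func (m *manager) TriggerManualRun(key string) {
	w, ok := m.workers[key]
	if !ok {
		fmt.Printf("W: failed to trigger a manual run: no worker for %q\n", key)
		return
	}
	select {
	case w.manualTrigger <- struct{}{}:
	default:
		fmt.Printf("W: failed to trigger a manual run: %q busy\n", key)
	}
}

func main() {
	m := &manager{workers: map[string]*worker{
		"calico-system/csi-node-driver-fz78r": {manualTrigger: make(chan struct{}, 1)},
	}}
	m.TriggerManualRun("calico-system/csi-node-driver-fz78r") // queued
	m.TriggerManualRun("default/missing-pod")                 // warns
}
```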
Mar 12 01:37:30.069067 systemd[1]: Started sshd@18-10.0.0.111:22-10.0.0.1:51434.service - OpenSSH per-connection server daemon (10.0.0.1:51434). Mar 12 01:37:30.070660 systemd-logind[1441]: Removed session 18. Mar 12 01:37:30.103602 sshd[6611]: Accepted publickey for core from 10.0.0.1 port 51434 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:30.105694 sshd[6611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:30.164992 systemd-logind[1441]: New session 19 of user core. Mar 12 01:37:30.176703 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 12 01:37:30.597068 sshd[6611]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:30.613025 systemd[1]: Started sshd@19-10.0.0.111:22-10.0.0.1:51448.service - OpenSSH per-connection server daemon (10.0.0.1:51448). Mar 12 01:37:30.613841 systemd[1]: sshd@18-10.0.0.111:22-10.0.0.1:51434.service: Deactivated successfully. Mar 12 01:37:30.616525 systemd[1]: session-19.scope: Deactivated successfully. Mar 12 01:37:30.621389 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit. Mar 12 01:37:30.623042 systemd-logind[1441]: Removed session 19. Mar 12 01:37:30.664772 sshd[6622]: Accepted publickey for core from 10.0.0.1 port 51448 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:30.666945 sshd[6622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:30.673984 systemd-logind[1441]: New session 20 of user core. Mar 12 01:37:30.683688 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 12 01:37:31.292438 sshd[6622]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:31.304507 systemd[1]: sshd@19-10.0.0.111:22-10.0.0.1:51448.service: Deactivated successfully. Mar 12 01:37:31.309228 systemd[1]: session-20.scope: Deactivated successfully. Mar 12 01:37:31.314046 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit. Mar 12 01:37:31.328321 systemd[1]: Started sshd@20-10.0.0.111:22-10.0.0.1:51452.service - OpenSSH per-connection server daemon (10.0.0.1:51452). Mar 12 01:37:31.331763 systemd-logind[1441]: Removed session 20. Mar 12 01:37:31.364997 sshd[6650]: Accepted publickey for core from 10.0.0.1 port 51452 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:31.367348 sshd[6650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:31.375086 systemd-logind[1441]: New session 21 of user core. Mar 12 01:37:31.380489 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 12 01:37:31.745494 sshd[6650]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:31.755941 systemd[1]: sshd@20-10.0.0.111:22-10.0.0.1:51452.service: Deactivated successfully. Mar 12 01:37:31.757863 systemd[1]: session-21.scope: Deactivated successfully. Mar 12 01:37:31.762893 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit. Mar 12 01:37:31.771673 systemd[1]: Started sshd@21-10.0.0.111:22-10.0.0.1:51466.service - OpenSSH per-connection server daemon (10.0.0.1:51466). Mar 12 01:37:31.773702 systemd-logind[1441]: Removed session 21. 
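The interleaved records above trace the full life of logind sessions 19 through 21: "Accepted publickey" and pam_unix "session opened" on connect, a session-N.scope from systemd, then "session closed" and "Deactivated successfully" on exit. Pairing the logind open and close records by session number yields per-session durations. A hedged parsing sketch over journal text in exactly this shape, assuming one record per line on stdin (parsed times carry no year, but durations within the capture are unaffected):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Patterns matching the systemd-logind records above, e.g.
// "Mar 12 01:37:30.164992 systemd-logind[1441]: New session 19 of user core."
var (
	opened = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*New session (\d+) of user`)
	closed = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Session (\d+) logged out`)
)

const layout = "Jan 2 15:04:05.000000"

func main() {
	starts := map[string]time.Time{} // session number -> open time
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(layout, m[1]); err == nil {
				starts[m[2]] = t
			}
		} else if m := closed.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(layout, m[1]); err == nil {
				if s, ok := starts[m[2]]; ok {
					fmt.Printf("session %s lasted %v\n", m[2], t.Sub(s))
				}
			}
		}
	}
}
```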
Mar 12 01:37:31.808167 sshd[6663]: Accepted publickey for core from 10.0.0.1 port 51466 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:31.810133 sshd[6663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:31.816423 systemd-logind[1441]: New session 22 of user core. Mar 12 01:37:31.827523 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 12 01:37:31.979972 sshd[6663]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:31.986462 systemd[1]: sshd@21-10.0.0.111:22-10.0.0.1:51466.service: Deactivated successfully. Mar 12 01:37:31.989754 systemd[1]: session-22.scope: Deactivated successfully. Mar 12 01:37:31.991108 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit. Mar 12 01:37:31.993372 systemd-logind[1441]: Removed session 22. Mar 12 01:37:33.901756 kubelet[2528]: E0312 01:37:33.901214 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:36.993683 systemd[1]: Started sshd@22-10.0.0.111:22-10.0.0.1:51482.service - OpenSSH per-connection server daemon (10.0.0.1:51482). Mar 12 01:37:37.036580 sshd[6700]: Accepted publickey for core from 10.0.0.1 port 51482 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:37.039475 sshd[6700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:37.045810 systemd-logind[1441]: New session 23 of user core. Mar 12 01:37:37.054600 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 12 01:37:37.189967 sshd[6700]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:37.195462 systemd[1]: sshd@22-10.0.0.111:22-10.0.0.1:51482.service: Deactivated successfully. Mar 12 01:37:37.197533 systemd[1]: session-23.scope: Deactivated successfully. Mar 12 01:37:37.198528 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit. Mar 12 01:37:37.200042 systemd-logind[1441]: Removed session 23. Mar 12 01:37:40.895947 kubelet[2528]: E0312 01:37:40.895872 2528 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:42.202695 systemd[1]: Started sshd@23-10.0.0.111:22-10.0.0.1:47264.service - OpenSSH per-connection server daemon (10.0.0.1:47264). Mar 12 01:37:42.241477 sshd[6738]: Accepted publickey for core from 10.0.0.1 port 47264 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:42.243182 sshd[6738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:42.248647 systemd-logind[1441]: New session 24 of user core. Mar 12 01:37:42.258649 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 12 01:37:42.384654 sshd[6738]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:42.389323 systemd[1]: sshd@23-10.0.0.111:22-10.0.0.1:47264.service: Deactivated successfully. Mar 12 01:37:42.391789 systemd[1]: session-24.scope: Deactivated successfully. Mar 12 01:37:42.392857 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit. Mar 12 01:37:42.394125 systemd-logind[1441]: Removed session 24. Mar 12 01:37:47.402867 systemd[1]: Started sshd@24-10.0.0.111:22-10.0.0.1:47274.service - OpenSSH per-connection server daemon (10.0.0.1:47274). 
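Every connection above spawns its own transient unit, sshd@N-10.0.0.111:22-10.0.0.1:PORT.service, because Flatcar runs sshd socket-activated with one service instance per accepted connection: sshd.socket listens, and systemd names each instance after the connection tuple. The same accept-then-delegate shape in plain Go, as an analogy only, not how systemd itself is implemented:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

// Accept loop in the style of a per-connection activated service: the
// listener plays the role of sshd.socket, and each handler goroutine the
// role of one sshd@...service instance named after the connection tuple.
func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:2222")
	if err != nil {
		log.Fatal(err)
	}
	for n := 0; ; n++ {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		// Unit-style instance name, e.g. "demo@0-127.0.0.1:2222-127.0.0.1:51434"
		name := fmt.Sprintf("demo@%d-%s-%s", n, conn.LocalAddr(), conn.RemoteAddr())
		go func(c net.Conn, name string) {
			defer c.Close()
			log.Printf("started %s", name)
			fmt.Fprintln(c, "hello from", name)
			log.Printf("deactivated %s", name)
		}(conn, name)
	}
}
```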
Mar 12 01:37:47.466333 sshd[6753]: Accepted publickey for core from 10.0.0.1 port 47274 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:37:47.468312 sshd[6753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:37:47.473661 systemd-logind[1441]: New session 25 of user core. Mar 12 01:37:47.487663 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 12 01:37:47.639946 sshd[6753]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:47.646039 systemd[1]: sshd@24-10.0.0.111:22-10.0.0.1:47274.service: Deactivated successfully. Mar 12 01:37:47.648966 systemd[1]: session-25.scope: Deactivated successfully. Mar 12 01:37:47.650334 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit. Mar 12 01:37:47.652045 systemd-logind[1441]: Removed session 25.
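These per-connection units exist only while their session is open; to observe them live you can query systemd over D-Bus. A sketch using the go-systemd bindings, assuming github.com/coreos/go-systemd/v22 is available and the process can reach the system bus:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx) // system-bus connection to systemd
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	units, err := conn.ListUnitsContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// Print the transient per-connection SSH services, e.g.
	// "sshd@24-10.0.0.111:22-10.0.0.1:47274.service".
	for _, u := range units {
		if strings.HasPrefix(u.Name, "sshd@") {
			fmt.Printf("%s %s/%s\n", u.Name, u.ActiveState, u.SubState)
		}
	}
}
```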